Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataflow/v1b3.Job
Creates a Cloud Dataflow job. To create a job, we recommend using projects.locations.jobs.create with a [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). Using projects.jobs.create is not recommended, as your job will always start in us-central1. Do not enter confidential information when you supply string values using the API.
Note: this resource's API does not support deletion. When the resource is deleted from Pulumi state, it will persist on Google Cloud.
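For illustration, a minimal sketch in Python that pins the job to a regional endpoint by setting location explicitly rather than relying on the default us-central1. The project ID, region, job name, and job type below are placeholder assumptions, not a complete pipeline configuration:

import pulumi_google_native as google_native

# Minimal sketch: placeholder project and region, not a full pipeline definition.
job = google_native.dataflow.v1b3.Job("example-job",
    project="my-project",      # assumed project ID
    location="europe-west1",   # regional endpoint, instead of the default us-central1
    name="wordcount-example",
    type=google_native.dataflow.v1b3.JobType.JOB_TYPE_BATCH)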
Create Job Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Job(name: string, args?: JobArgs, opts?: CustomResourceOptions);
@overload
def Job(resource_name: str,
        args: Optional[JobArgs] = None,
        opts: Optional[ResourceOptions] = None)
@overload
def Job(resource_name: str,
        opts: Optional[ResourceOptions] = None,
        client_request_id: Optional[str] = None,
        create_time: Optional[str] = None,
        created_from_snapshot_id: Optional[str] = None,
        current_state: Optional[JobCurrentState] = None,
        current_state_time: Optional[str] = None,
        environment: Optional[EnvironmentArgs] = None,
        execution_info: Optional[JobExecutionInfoArgs] = None,
        id: Optional[str] = None,
        job_metadata: Optional[JobMetadataArgs] = None,
        labels: Optional[Mapping[str, str]] = None,
        location: Optional[str] = None,
        name: Optional[str] = None,
        pipeline_description: Optional[PipelineDescriptionArgs] = None,
        project: Optional[str] = None,
        replace_job_id: Optional[str] = None,
        replaced_by_job_id: Optional[str] = None,
        requested_state: Optional[JobRequestedState] = None,
        runtime_updatable_params: Optional[RuntimeUpdatableParamsArgs] = None,
        satisfies_pzs: Optional[bool] = None,
        stage_states: Optional[Sequence[ExecutionStageStateArgs]] = None,
        start_time: Optional[str] = None,
        steps: Optional[Sequence[StepArgs]] = None,
        steps_location: Optional[str] = None,
        temp_files: Optional[Sequence[str]] = None,
        transform_name_mapping: Optional[Mapping[str, str]] = None,
        type: Optional[JobType] = None,
        view: Optional[str] = None)
func NewJob(ctx *Context, name string, args *JobArgs, opts ...ResourceOption) (*Job, error)
public Job(string name, JobArgs? args = null, CustomResourceOptions? opts = null)
type: google-native:dataflow/v1b3:Job
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var examplejobResourceResourceFromDataflowv1b3 = new GoogleNative.Dataflow.V1b3.Job("examplejobResourceResourceFromDataflowv1b3", new()
{
    ClientRequestId = "string",
    CreateTime = "string",
    CreatedFromSnapshotId = "string",
    CurrentState = GoogleNative.Dataflow.V1b3.JobCurrentState.JobStateUnknown,
    CurrentStateTime = "string",
    Environment = new GoogleNative.Dataflow.V1b3.Inputs.EnvironmentArgs
    {
        ClusterManagerApiService = "string",
        Dataset = "string",
        DebugOptions = new GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsArgs
        {
            DataSampling = new GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfigArgs
            {
                Behaviors = new[]
                {
                    GoogleNative.Dataflow.V1b3.DataSamplingConfigBehaviorsItem.DataSamplingBehaviorUnspecified,
                },
            },
            EnableHotKeyLogging = false,
        },
        Experiments = new[]
        {
            "string",
        },
        FlexResourceSchedulingGoal = GoogleNative.Dataflow.V1b3.EnvironmentFlexResourceSchedulingGoal.FlexrsUnspecified,
        InternalExperiments = 
        {
            { "string", "string" },
        },
        SdkPipelineOptions = 
        {
            { "string", "string" },
        },
        ServiceAccountEmail = "string",
        ServiceKmsKeyName = "string",
        ServiceOptions = new[]
        {
            "string",
        },
        TempStoragePrefix = "string",
        UserAgent = 
        {
            { "string", "string" },
        },
        Version = 
        {
            { "string", "string" },
        },
        WorkerPools = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolArgs
            {
                Network = "string",
                DiskType = "string",
                NumThreadsPerWorker = 0,
                OnHostMaintenance = "string",
                NumWorkers = 0,
                IpConfiguration = GoogleNative.Dataflow.V1b3.WorkerPoolIpConfiguration.WorkerIpUnspecified,
                Kind = "string",
                MachineType = "string",
                Metadata = 
                {
                    { "string", "string" },
                },
                AutoscalingSettings = new GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettingsArgs
                {
                    Algorithm = GoogleNative.Dataflow.V1b3.AutoscalingSettingsAlgorithm.AutoscalingAlgorithmUnknown,
                    MaxNumWorkers = 0,
                },
                DiskSizeGb = 0,
                DefaultPackageSet = GoogleNative.Dataflow.V1b3.WorkerPoolDefaultPackageSet.DefaultPackageSetUnknown,
                DiskSourceImage = "string",
                Packages = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.PackageArgs
                    {
                        Location = "string",
                        Name = "string",
                    },
                },
                PoolArgs = 
                {
                    { "string", "string" },
                },
                SdkHarnessContainerImages = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageArgs
                    {
                        Capabilities = new[]
                        {
                            "string",
                        },
                        ContainerImage = "string",
                        EnvironmentId = "string",
                        UseSingleCorePerContainer = false,
                    },
                },
                Subnetwork = "string",
                TaskrunnerSettings = new GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettingsArgs
                {
                    Alsologtostderr = false,
                    BaseTaskDir = "string",
                    BaseUrl = "string",
                    CommandlinesFileName = "string",
                    ContinueOnException = false,
                    DataflowApiVersion = "string",
                    HarnessCommand = "string",
                    LanguageHint = "string",
                    LogDir = "string",
                    LogToSerialconsole = false,
                    LogUploadLocation = "string",
                    OauthScopes = new[]
                    {
                        "string",
                    },
                    ParallelWorkerSettings = new GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsArgs
                    {
                        BaseUrl = "string",
                        ReportingEnabled = false,
                        ServicePath = "string",
                        ShuffleServicePath = "string",
                        TempStoragePrefix = "string",
                        WorkerId = "string",
                    },
                    StreamingWorkerMainClass = "string",
                    TaskGroup = "string",
                    TaskUser = "string",
                    TempStoragePrefix = "string",
                    VmId = "string",
                    WorkflowFileName = "string",
                },
                TeardownPolicy = GoogleNative.Dataflow.V1b3.WorkerPoolTeardownPolicy.TeardownPolicyUnknown,
                DataDisks = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.DiskArgs
                    {
                        DiskType = "string",
                        MountPoint = "string",
                        SizeGb = 0,
                    },
                },
                Zone = "string",
            },
        },
        WorkerRegion = "string",
        WorkerZone = "string",
    },
    Id = "string",
    JobMetadata = new GoogleNative.Dataflow.V1b3.Inputs.JobMetadataArgs
    {
        BigTableDetails = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsArgs
            {
                InstanceId = "string",
                Project = "string",
                TableId = "string",
            },
        },
        BigqueryDetails = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsArgs
            {
                Dataset = "string",
                Project = "string",
                Query = "string",
                Table = "string",
            },
        },
        DatastoreDetails = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsArgs
            {
                Namespace = "string",
                Project = "string",
            },
        },
        FileDetails = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsArgs
            {
                FilePattern = "string",
            },
        },
        PubsubDetails = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsArgs
            {
                Subscription = "string",
                Topic = "string",
            },
        },
        SdkVersion = new GoogleNative.Dataflow.V1b3.Inputs.SdkVersionArgs
        {
            SdkSupportStatus = GoogleNative.Dataflow.V1b3.SdkVersionSdkSupportStatus.Unknown,
            Version = "string",
            VersionDisplayName = "string",
        },
        SpannerDetails = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsArgs
            {
                DatabaseId = "string",
                InstanceId = "string",
                Project = "string",
            },
        },
        UserDisplayProperties = 
        {
            { "string", "string" },
        },
    },
    Labels = 
    {
        { "string", "string" },
    },
    Location = "string",
    Name = "string",
    PipelineDescription = new GoogleNative.Dataflow.V1b3.Inputs.PipelineDescriptionArgs
    {
        DisplayData = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.DisplayDataArgs
            {
                BoolValue = false,
                DurationValue = "string",
                FloatValue = 0,
                Int64Value = "string",
                JavaClassValue = "string",
                Key = "string",
                Label = "string",
                Namespace = "string",
                ShortStrValue = "string",
                StrValue = "string",
                TimestampValue = "string",
                Url = "string",
            },
        },
        ExecutionPipelineStage = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryArgs
            {
                ComponentSource = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceArgs
                    {
                        Name = "string",
                        OriginalTransformOrCollection = "string",
                        UserName = "string",
                    },
                },
                ComponentTransform = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformArgs
                    {
                        Name = "string",
                        OriginalTransform = "string",
                        UserName = "string",
                    },
                },
                Id = "string",
                InputSource = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.StageSourceArgs
                    {
                        Name = "string",
                        OriginalTransformOrCollection = "string",
                        SizeBytes = "string",
                        UserName = "string",
                    },
                },
                Kind = GoogleNative.Dataflow.V1b3.ExecutionStageSummaryKind.UnknownKind,
                Name = "string",
                OutputSource = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.StageSourceArgs
                    {
                        Name = "string",
                        OriginalTransformOrCollection = "string",
                        SizeBytes = "string",
                        UserName = "string",
                    },
                },
                PrerequisiteStage = new[]
                {
                    "string",
                },
            },
        },
        OriginalPipelineTransform = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryArgs
            {
                DisplayData = new[]
                {
                    new GoogleNative.Dataflow.V1b3.Inputs.DisplayDataArgs
                    {
                        BoolValue = false,
                        DurationValue = "string",
                        FloatValue = 0,
                        Int64Value = "string",
                        JavaClassValue = "string",
                        Key = "string",
                        Label = "string",
                        Namespace = "string",
                        ShortStrValue = "string",
                        StrValue = "string",
                        TimestampValue = "string",
                        Url = "string",
                    },
                },
                Id = "string",
                InputCollectionName = new[]
                {
                    "string",
                },
                Kind = GoogleNative.Dataflow.V1b3.TransformSummaryKind.UnknownKind,
                Name = "string",
                OutputCollectionName = new[]
                {
                    "string",
                },
            },
        },
        StepNamesHash = "string",
    },
    Project = "string",
    ReplaceJobId = "string",
    ReplacedByJobId = "string",
    RequestedState = GoogleNative.Dataflow.V1b3.JobRequestedState.JobStateUnknown,
    RuntimeUpdatableParams = new GoogleNative.Dataflow.V1b3.Inputs.RuntimeUpdatableParamsArgs
    {
        MaxNumWorkers = 0,
        MinNumWorkers = 0,
    },
    SatisfiesPzs = false,
    StageStates = new[]
    {
        new GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageStateArgs
        {
            CurrentStateTime = "string",
            ExecutionStageName = "string",
            ExecutionStageState = GoogleNative.Dataflow.V1b3.ExecutionStageStateExecutionStageState.JobStateUnknown,
        },
    },
    StartTime = "string",
    Steps = new[]
    {
        new GoogleNative.Dataflow.V1b3.Inputs.StepArgs
        {
            Kind = "string",
            Name = "string",
            Properties = 
            {
                { "string", "string" },
            },
        },
    },
    StepsLocation = "string",
    TempFiles = new[]
    {
        "string",
    },
    TransformNameMapping = 
    {
        { "string", "string" },
    },
    Type = GoogleNative.Dataflow.V1b3.JobType.JobTypeUnknown,
    View = "string",
});
example, err := dataflow.NewJob(ctx, "examplejobResourceResourceFromDataflowv1b3", &dataflow.JobArgs{
	ClientRequestId:       pulumi.String("string"),
	CreateTime:            pulumi.String("string"),
	CreatedFromSnapshotId: pulumi.String("string"),
	CurrentState:          dataflow.JobCurrentStateJobStateUnknown,
	CurrentStateTime:      pulumi.String("string"),
	Environment: &dataflow.EnvironmentArgs{
		ClusterManagerApiService: pulumi.String("string"),
		Dataset:                  pulumi.String("string"),
		DebugOptions: &dataflow.DebugOptionsArgs{
			DataSampling: &dataflow.DataSamplingConfigArgs{
				Behaviors: dataflow.DataSamplingConfigBehaviorsItemArray{
					dataflow.DataSamplingConfigBehaviorsItemDataSamplingBehaviorUnspecified,
				},
			},
			EnableHotKeyLogging: pulumi.Bool(false),
		},
		Experiments: pulumi.StringArray{
			pulumi.String("string"),
		},
		FlexResourceSchedulingGoal: dataflow.EnvironmentFlexResourceSchedulingGoalFlexrsUnspecified,
		InternalExperiments: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		SdkPipelineOptions: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		ServiceAccountEmail: pulumi.String("string"),
		ServiceKmsKeyName:   pulumi.String("string"),
		ServiceOptions: pulumi.StringArray{
			pulumi.String("string"),
		},
		TempStoragePrefix: pulumi.String("string"),
		UserAgent: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		Version: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		WorkerPools: dataflow.WorkerPoolArray{
			&dataflow.WorkerPoolArgs{
				Network:             pulumi.String("string"),
				DiskType:            pulumi.String("string"),
				NumThreadsPerWorker: pulumi.Int(0),
				OnHostMaintenance:   pulumi.String("string"),
				NumWorkers:          pulumi.Int(0),
				IpConfiguration:     dataflow.WorkerPoolIpConfigurationWorkerIpUnspecified,
				Kind:                pulumi.String("string"),
				MachineType:         pulumi.String("string"),
				Metadata: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				AutoscalingSettings: &dataflow.AutoscalingSettingsArgs{
					Algorithm:     dataflow.AutoscalingSettingsAlgorithmAutoscalingAlgorithmUnknown,
					MaxNumWorkers: pulumi.Int(0),
				},
				DiskSizeGb:        pulumi.Int(0),
				DefaultPackageSet: dataflow.WorkerPoolDefaultPackageSetDefaultPackageSetUnknown,
				DiskSourceImage:   pulumi.String("string"),
				Packages: dataflow.PackageArray{
					&dataflow.PackageArgs{
						Location: pulumi.String("string"),
						Name:     pulumi.String("string"),
					},
				},
				PoolArgs: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				SdkHarnessContainerImages: dataflow.SdkHarnessContainerImageArray{
					&dataflow.SdkHarnessContainerImageArgs{
						Capabilities: pulumi.StringArray{
							pulumi.String("string"),
						},
						ContainerImage:            pulumi.String("string"),
						EnvironmentId:             pulumi.String("string"),
						UseSingleCorePerContainer: pulumi.Bool(false),
					},
				},
				Subnetwork: pulumi.String("string"),
				TaskrunnerSettings: &dataflow.TaskRunnerSettingsArgs{
					Alsologtostderr:      pulumi.Bool(false),
					BaseTaskDir:          pulumi.String("string"),
					BaseUrl:              pulumi.String("string"),
					CommandlinesFileName: pulumi.String("string"),
					ContinueOnException:  pulumi.Bool(false),
					DataflowApiVersion:   pulumi.String("string"),
					HarnessCommand:       pulumi.String("string"),
					LanguageHint:         pulumi.String("string"),
					LogDir:               pulumi.String("string"),
					LogToSerialconsole:   pulumi.Bool(false),
					LogUploadLocation:    pulumi.String("string"),
					OauthScopes: pulumi.StringArray{
						pulumi.String("string"),
					},
					ParallelWorkerSettings: &dataflow.WorkerSettingsArgs{
						BaseUrl:            pulumi.String("string"),
						ReportingEnabled:   pulumi.Bool(false),
						ServicePath:        pulumi.String("string"),
						ShuffleServicePath: pulumi.String("string"),
						TempStoragePrefix:  pulumi.String("string"),
						WorkerId:           pulumi.String("string"),
					},
					StreamingWorkerMainClass: pulumi.String("string"),
					TaskGroup:                pulumi.String("string"),
					TaskUser:                 pulumi.String("string"),
					TempStoragePrefix:        pulumi.String("string"),
					VmId:                     pulumi.String("string"),
					WorkflowFileName:         pulumi.String("string"),
				},
				TeardownPolicy: dataflow.WorkerPoolTeardownPolicyTeardownPolicyUnknown,
				DataDisks: dataflow.DiskArray{
					&dataflow.DiskArgs{
						DiskType:   pulumi.String("string"),
						MountPoint: pulumi.String("string"),
						SizeGb:     pulumi.Int(0),
					},
				},
				Zone: pulumi.String("string"),
			},
		},
		WorkerRegion: pulumi.String("string"),
		WorkerZone:   pulumi.String("string"),
	},
	Id: pulumi.String("string"),
	JobMetadata: &dataflow.JobMetadataArgs{
		BigTableDetails: dataflow.BigTableIODetailsArray{
			&dataflow.BigTableIODetailsArgs{
				InstanceId: pulumi.String("string"),
				Project:    pulumi.String("string"),
				TableId:    pulumi.String("string"),
			},
		},
		BigqueryDetails: dataflow.BigQueryIODetailsArray{
			&dataflow.BigQueryIODetailsArgs{
				Dataset: pulumi.String("string"),
				Project: pulumi.String("string"),
				Query:   pulumi.String("string"),
				Table:   pulumi.String("string"),
			},
		},
		DatastoreDetails: dataflow.DatastoreIODetailsArray{
			&dataflow.DatastoreIODetailsArgs{
				Namespace: pulumi.String("string"),
				Project:   pulumi.String("string"),
			},
		},
		FileDetails: dataflow.FileIODetailsArray{
			&dataflow.FileIODetailsArgs{
				FilePattern: pulumi.String("string"),
			},
		},
		PubsubDetails: dataflow.PubSubIODetailsArray{
			&dataflow.PubSubIODetailsArgs{
				Subscription: pulumi.String("string"),
				Topic:        pulumi.String("string"),
			},
		},
		SdkVersion: &dataflow.SdkVersionArgs{
			SdkSupportStatus:   dataflow.SdkVersionSdkSupportStatusUnknown,
			Version:            pulumi.String("string"),
			VersionDisplayName: pulumi.String("string"),
		},
		SpannerDetails: dataflow.SpannerIODetailsArray{
			&dataflow.SpannerIODetailsArgs{
				DatabaseId: pulumi.String("string"),
				InstanceId: pulumi.String("string"),
				Project:    pulumi.String("string"),
			},
		},
		UserDisplayProperties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Location: pulumi.String("string"),
	Name:     pulumi.String("string"),
	PipelineDescription: &dataflow.PipelineDescriptionArgs{
		DisplayData: dataflow.DisplayDataArray{
			&dataflow.DisplayDataArgs{
				BoolValue:      pulumi.Bool(false),
				DurationValue:  pulumi.String("string"),
				FloatValue:     pulumi.Float64(0),
				Int64Value:     pulumi.String("string"),
				JavaClassValue: pulumi.String("string"),
				Key:            pulumi.String("string"),
				Label:          pulumi.String("string"),
				Namespace:      pulumi.String("string"),
				ShortStrValue:  pulumi.String("string"),
				StrValue:       pulumi.String("string"),
				TimestampValue: pulumi.String("string"),
				Url:            pulumi.String("string"),
			},
		},
		ExecutionPipelineStage: dataflow.ExecutionStageSummaryArray{
			&dataflow.ExecutionStageSummaryArgs{
				ComponentSource: dataflow.ComponentSourceArray{
					&dataflow.ComponentSourceArgs{
						Name:                          pulumi.String("string"),
						OriginalTransformOrCollection: pulumi.String("string"),
						UserName:                      pulumi.String("string"),
					},
				},
				ComponentTransform: dataflow.ComponentTransformArray{
					&dataflow.ComponentTransformArgs{
						Name:              pulumi.String("string"),
						OriginalTransform: pulumi.String("string"),
						UserName:          pulumi.String("string"),
					},
				},
				Id: pulumi.String("string"),
				InputSource: dataflow.StageSourceArray{
					&dataflow.StageSourceArgs{
						Name:                          pulumi.String("string"),
						OriginalTransformOrCollection: pulumi.String("string"),
						SizeBytes:                     pulumi.String("string"),
						UserName:                      pulumi.String("string"),
					},
				},
				Kind: dataflow.ExecutionStageSummaryKindUnknownKind,
				Name: pulumi.String("string"),
				OutputSource: dataflow.StageSourceArray{
					&dataflow.StageSourceArgs{
						Name:                          pulumi.String("string"),
						OriginalTransformOrCollection: pulumi.String("string"),
						SizeBytes:                     pulumi.String("string"),
						UserName:                      pulumi.String("string"),
					},
				},
				PrerequisiteStage: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
		},
		OriginalPipelineTransform: dataflow.TransformSummaryArray{
			&dataflow.TransformSummaryArgs{
				DisplayData: dataflow.DisplayDataArray{
					&dataflow.DisplayDataArgs{
						BoolValue:      pulumi.Bool(false),
						DurationValue:  pulumi.String("string"),
						FloatValue:     pulumi.Float64(0),
						Int64Value:     pulumi.String("string"),
						JavaClassValue: pulumi.String("string"),
						Key:            pulumi.String("string"),
						Label:          pulumi.String("string"),
						Namespace:      pulumi.String("string"),
						ShortStrValue:  pulumi.String("string"),
						StrValue:       pulumi.String("string"),
						TimestampValue: pulumi.String("string"),
						Url:            pulumi.String("string"),
					},
				},
				Id: pulumi.String("string"),
				InputCollectionName: pulumi.StringArray{
					pulumi.String("string"),
				},
				Kind: dataflow.TransformSummaryKindUnknownKind,
				Name: pulumi.String("string"),
				OutputCollectionName: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
		},
		StepNamesHash: pulumi.String("string"),
	},
	Project:         pulumi.String("string"),
	ReplaceJobId:    pulumi.String("string"),
	ReplacedByJobId: pulumi.String("string"),
	RequestedState:  dataflow.JobRequestedStateJobStateUnknown,
	RuntimeUpdatableParams: &dataflow.RuntimeUpdatableParamsArgs{
		MaxNumWorkers: pulumi.Int(0),
		MinNumWorkers: pulumi.Int(0),
	},
	SatisfiesPzs: pulumi.Bool(false),
	StageStates: dataflow.ExecutionStageStateArray{
		&dataflow.ExecutionStageStateArgs{
			CurrentStateTime:    pulumi.String("string"),
			ExecutionStageName:  pulumi.String("string"),
			ExecutionStageState: dataflow.ExecutionStageStateExecutionStageStateJobStateUnknown,
		},
	},
	StartTime: pulumi.String("string"),
	Steps: dataflow.StepArray{
		&dataflow.StepArgs{
			Kind: pulumi.String("string"),
			Name: pulumi.String("string"),
			Properties: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
	},
	StepsLocation: pulumi.String("string"),
	TempFiles: pulumi.StringArray{
		pulumi.String("string"),
	},
	TransformNameMapping: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Type: dataflow.JobTypeJobTypeUnknown,
	View: pulumi.String("string"),
})
var examplejobResourceResourceFromDataflowv1b3 = new Job("examplejobResourceResourceFromDataflowv1b3", JobArgs.builder()
    .clientRequestId("string")
    .createTime("string")
    .createdFromSnapshotId("string")
    .currentState("JOB_STATE_UNKNOWN")
    .currentStateTime("string")
    .environment(EnvironmentArgs.builder()
        .clusterManagerApiService("string")
        .dataset("string")
        .debugOptions(DebugOptionsArgs.builder()
            .dataSampling(DataSamplingConfigArgs.builder()
                .behaviors("DATA_SAMPLING_BEHAVIOR_UNSPECIFIED")
                .build())
            .enableHotKeyLogging(false)
            .build())
        .experiments("string")
        .flexResourceSchedulingGoal("FLEXRS_UNSPECIFIED")
        .internalExperiments(Map.of("string", "string"))
        .sdkPipelineOptions(Map.of("string", "string"))
        .serviceAccountEmail("string")
        .serviceKmsKeyName("string")
        .serviceOptions("string")
        .tempStoragePrefix("string")
        .userAgent(Map.of("string", "string"))
        .version(Map.of("string", "string"))
        .workerPools(WorkerPoolArgs.builder()
            .network("string")
            .diskType("string")
            .numThreadsPerWorker(0)
            .onHostMaintenance("string")
            .numWorkers(0)
            .ipConfiguration("WORKER_IP_UNSPECIFIED")
            .kind("string")
            .machineType("string")
            .metadata(Map.of("string", "string"))
            .autoscalingSettings(AutoscalingSettingsArgs.builder()
                .algorithm("AUTOSCALING_ALGORITHM_UNKNOWN")
                .maxNumWorkers(0)
                .build())
            .diskSizeGb(0)
            .defaultPackageSet("DEFAULT_PACKAGE_SET_UNKNOWN")
            .diskSourceImage("string")
            .packages(PackageArgs.builder()
                .location("string")
                .name("string")
                .build())
            .poolArgs(Map.of("string", "string"))
            .sdkHarnessContainerImages(SdkHarnessContainerImageArgs.builder()
                .capabilities("string")
                .containerImage("string")
                .environmentId("string")
                .useSingleCorePerContainer(false)
                .build())
            .subnetwork("string")
            .taskrunnerSettings(TaskRunnerSettingsArgs.builder()
                .alsologtostderr(false)
                .baseTaskDir("string")
                .baseUrl("string")
                .commandlinesFileName("string")
                .continueOnException(false)
                .dataflowApiVersion("string")
                .harnessCommand("string")
                .languageHint("string")
                .logDir("string")
                .logToSerialconsole(false)
                .logUploadLocation("string")
                .oauthScopes("string")
                .parallelWorkerSettings(WorkerSettingsArgs.builder()
                    .baseUrl("string")
                    .reportingEnabled(false)
                    .servicePath("string")
                    .shuffleServicePath("string")
                    .tempStoragePrefix("string")
                    .workerId("string")
                    .build())
                .streamingWorkerMainClass("string")
                .taskGroup("string")
                .taskUser("string")
                .tempStoragePrefix("string")
                .vmId("string")
                .workflowFileName("string")
                .build())
            .teardownPolicy("TEARDOWN_POLICY_UNKNOWN")
            .dataDisks(DiskArgs.builder()
                .diskType("string")
                .mountPoint("string")
                .sizeGb(0)
                .build())
            .zone("string")
            .build())
        .workerRegion("string")
        .workerZone("string")
        .build())
    .id("string")
    .jobMetadata(JobMetadataArgs.builder()
        .bigTableDetails(BigTableIODetailsArgs.builder()
            .instanceId("string")
            .project("string")
            .tableId("string")
            .build())
        .bigqueryDetails(BigQueryIODetailsArgs.builder()
            .dataset("string")
            .project("string")
            .query("string")
            .table("string")
            .build())
        .datastoreDetails(DatastoreIODetailsArgs.builder()
            .namespace("string")
            .project("string")
            .build())
        .fileDetails(FileIODetailsArgs.builder()
            .filePattern("string")
            .build())
        .pubsubDetails(PubSubIODetailsArgs.builder()
            .subscription("string")
            .topic("string")
            .build())
        .sdkVersion(SdkVersionArgs.builder()
            .sdkSupportStatus("UNKNOWN")
            .version("string")
            .versionDisplayName("string")
            .build())
        .spannerDetails(SpannerIODetailsArgs.builder()
            .databaseId("string")
            .instanceId("string")
            .project("string")
            .build())
        .userDisplayProperties(Map.of("string", "string"))
        .build())
    .labels(Map.of("string", "string"))
    .location("string")
    .name("string")
    .pipelineDescription(PipelineDescriptionArgs.builder()
        .displayData(DisplayDataArgs.builder()
            .boolValue(false)
            .durationValue("string")
            .floatValue(0)
            .int64Value("string")
            .javaClassValue("string")
            .key("string")
            .label("string")
            .namespace("string")
            .shortStrValue("string")
            .strValue("string")
            .timestampValue("string")
            .url("string")
            .build())
        .executionPipelineStage(ExecutionStageSummaryArgs.builder()
            .componentSource(ComponentSourceArgs.builder()
                .name("string")
                .originalTransformOrCollection("string")
                .userName("string")
                .build())
            .componentTransform(ComponentTransformArgs.builder()
                .name("string")
                .originalTransform("string")
                .userName("string")
                .build())
            .id("string")
            .inputSource(StageSourceArgs.builder()
                .name("string")
                .originalTransformOrCollection("string")
                .sizeBytes("string")
                .userName("string")
                .build())
            .kind("UNKNOWN_KIND")
            .name("string")
            .outputSource(StageSourceArgs.builder()
                .name("string")
                .originalTransformOrCollection("string")
                .sizeBytes("string")
                .userName("string")
                .build())
            .prerequisiteStage("string")
            .build())
        .originalPipelineTransform(TransformSummaryArgs.builder()
            .displayData(DisplayDataArgs.builder()
                .boolValue(false)
                .durationValue("string")
                .floatValue(0)
                .int64Value("string")
                .javaClassValue("string")
                .key("string")
                .label("string")
                .namespace("string")
                .shortStrValue("string")
                .strValue("string")
                .timestampValue("string")
                .url("string")
                .build())
            .id("string")
            .inputCollectionName("string")
            .kind("UNKNOWN_KIND")
            .name("string")
            .outputCollectionName("string")
            .build())
        .stepNamesHash("string")
        .build())
    .project("string")
    .replaceJobId("string")
    .replacedByJobId("string")
    .requestedState("JOB_STATE_UNKNOWN")
    .runtimeUpdatableParams(RuntimeUpdatableParamsArgs.builder()
        .maxNumWorkers(0)
        .minNumWorkers(0)
        .build())
    .satisfiesPzs(false)
    .stageStates(ExecutionStageStateArgs.builder()
        .currentStateTime("string")
        .executionStageName("string")
        .executionStageState("JOB_STATE_UNKNOWN")
        .build())
    .startTime("string")
    .steps(StepArgs.builder()
        .kind("string")
        .name("string")
        .properties(Map.of("string", "string"))
        .build())
    .stepsLocation("string")
    .tempFiles("string")
    .transformNameMapping(Map.of("string", "string"))
    .type("JOB_TYPE_UNKNOWN")
    .view("string")
    .build());
examplejob_resource_resource_from_dataflowv1b3 = google_native.dataflow.v1b3.Job("examplejobResourceResourceFromDataflowv1b3",
    client_request_id="string",
    create_time="string",
    created_from_snapshot_id="string",
    current_state=google_native.dataflow.v1b3.JobCurrentState.JOB_STATE_UNKNOWN,
    current_state_time="string",
    environment={
        "cluster_manager_api_service": "string",
        "dataset": "string",
        "debug_options": {
            "data_sampling": {
                "behaviors": [google_native.dataflow.v1b3.DataSamplingConfigBehaviorsItem.DATA_SAMPLING_BEHAVIOR_UNSPECIFIED],
            },
            "enable_hot_key_logging": False,
        },
        "experiments": ["string"],
        "flex_resource_scheduling_goal": google_native.dataflow.v1b3.EnvironmentFlexResourceSchedulingGoal.FLEXRS_UNSPECIFIED,
        "internal_experiments": {
            "string": "string",
        },
        "sdk_pipeline_options": {
            "string": "string",
        },
        "service_account_email": "string",
        "service_kms_key_name": "string",
        "service_options": ["string"],
        "temp_storage_prefix": "string",
        "user_agent": {
            "string": "string",
        },
        "version": {
            "string": "string",
        },
        "worker_pools": [{
            "network": "string",
            "disk_type": "string",
            "num_threads_per_worker": 0,
            "on_host_maintenance": "string",
            "num_workers": 0,
            "ip_configuration": google_native.dataflow.v1b3.WorkerPoolIpConfiguration.WORKER_IP_UNSPECIFIED,
            "kind": "string",
            "machine_type": "string",
            "metadata": {
                "string": "string",
            },
            "autoscaling_settings": {
                "algorithm": google_native.dataflow.v1b3.AutoscalingSettingsAlgorithm.AUTOSCALING_ALGORITHM_UNKNOWN,
                "max_num_workers": 0,
            },
            "disk_size_gb": 0,
            "default_package_set": google_native.dataflow.v1b3.WorkerPoolDefaultPackageSet.DEFAULT_PACKAGE_SET_UNKNOWN,
            "disk_source_image": "string",
            "packages": [{
                "location": "string",
                "name": "string",
            }],
            "pool_args": {
                "string": "string",
            },
            "sdk_harness_container_images": [{
                "capabilities": ["string"],
                "container_image": "string",
                "environment_id": "string",
                "use_single_core_per_container": False,
            }],
            "subnetwork": "string",
            "taskrunner_settings": {
                "alsologtostderr": False,
                "base_task_dir": "string",
                "base_url": "string",
                "commandlines_file_name": "string",
                "continue_on_exception": False,
                "dataflow_api_version": "string",
                "harness_command": "string",
                "language_hint": "string",
                "log_dir": "string",
                "log_to_serialconsole": False,
                "log_upload_location": "string",
                "oauth_scopes": ["string"],
                "parallel_worker_settings": {
                    "base_url": "string",
                    "reporting_enabled": False,
                    "service_path": "string",
                    "shuffle_service_path": "string",
                    "temp_storage_prefix": "string",
                    "worker_id": "string",
                },
                "streaming_worker_main_class": "string",
                "task_group": "string",
                "task_user": "string",
                "temp_storage_prefix": "string",
                "vm_id": "string",
                "workflow_file_name": "string",
            },
            "teardown_policy": google_native.dataflow.v1b3.WorkerPoolTeardownPolicy.TEARDOWN_POLICY_UNKNOWN,
            "data_disks": [{
                "disk_type": "string",
                "mount_point": "string",
                "size_gb": 0,
            }],
            "zone": "string",
        }],
        "worker_region": "string",
        "worker_zone": "string",
    },
    id="string",
    job_metadata={
        "big_table_details": [{
            "instance_id": "string",
            "project": "string",
            "table_id": "string",
        }],
        "bigquery_details": [{
            "dataset": "string",
            "project": "string",
            "query": "string",
            "table": "string",
        }],
        "datastore_details": [{
            "namespace": "string",
            "project": "string",
        }],
        "file_details": [{
            "file_pattern": "string",
        }],
        "pubsub_details": [{
            "subscription": "string",
            "topic": "string",
        }],
        "sdk_version": {
            "sdk_support_status": google_native.dataflow.v1b3.SdkVersionSdkSupportStatus.UNKNOWN,
            "version": "string",
            "version_display_name": "string",
        },
        "spanner_details": [{
            "database_id": "string",
            "instance_id": "string",
            "project": "string",
        }],
        "user_display_properties": {
            "string": "string",
        },
    },
    labels={
        "string": "string",
    },
    location="string",
    name="string",
    pipeline_description={
        "display_data": [{
            "bool_value": False,
            "duration_value": "string",
            "float_value": 0,
            "int64_value": "string",
            "java_class_value": "string",
            "key": "string",
            "label": "string",
            "namespace": "string",
            "short_str_value": "string",
            "str_value": "string",
            "timestamp_value": "string",
            "url": "string",
        }],
        "execution_pipeline_stage": [{
            "component_source": [{
                "name": "string",
                "original_transform_or_collection": "string",
                "user_name": "string",
            }],
            "component_transform": [{
                "name": "string",
                "original_transform": "string",
                "user_name": "string",
            }],
            "id": "string",
            "input_source": [{
                "name": "string",
                "original_transform_or_collection": "string",
                "size_bytes": "string",
                "user_name": "string",
            }],
            "kind": google_native.dataflow.v1b3.ExecutionStageSummaryKind.UNKNOWN_KIND,
            "name": "string",
            "output_source": [{
                "name": "string",
                "original_transform_or_collection": "string",
                "size_bytes": "string",
                "user_name": "string",
            }],
            "prerequisite_stage": ["string"],
        }],
        "original_pipeline_transform": [{
            "display_data": [{
                "bool_value": False,
                "duration_value": "string",
                "float_value": 0,
                "int64_value": "string",
                "java_class_value": "string",
                "key": "string",
                "label": "string",
                "namespace": "string",
                "short_str_value": "string",
                "str_value": "string",
                "timestamp_value": "string",
                "url": "string",
            }],
            "id": "string",
            "input_collection_name": ["string"],
            "kind": google_native.dataflow.v1b3.TransformSummaryKind.UNKNOWN_KIND,
            "name": "string",
            "output_collection_name": ["string"],
        }],
        "step_names_hash": "string",
    },
    project="string",
    replace_job_id="string",
    replaced_by_job_id="string",
    requested_state=google_native.dataflow.v1b3.JobRequestedState.JOB_STATE_UNKNOWN,
    runtime_updatable_params={
        "max_num_workers": 0,
        "min_num_workers": 0,
    },
    satisfies_pzs=False,
    stage_states=[{
        "current_state_time": "string",
        "execution_stage_name": "string",
        "execution_stage_state": google_native.dataflow.v1b3.ExecutionStageStateExecutionStageState.JOB_STATE_UNKNOWN,
    }],
    start_time="string",
    steps=[{
        "kind": "string",
        "name": "string",
        "properties": {
            "string": "string",
        },
    }],
    steps_location="string",
    temp_files=["string"],
    transform_name_mapping={
        "string": "string",
    },
    type=google_native.dataflow.v1b3.JobType.JOB_TYPE_UNKNOWN,
    view="string")
const examplejobResourceResourceFromDataflowv1b3 = new google_native.dataflow.v1b3.Job("examplejobResourceResourceFromDataflowv1b3", {
    clientRequestId: "string",
    createTime: "string",
    createdFromSnapshotId: "string",
    currentState: google_native.dataflow.v1b3.JobCurrentState.JobStateUnknown,
    currentStateTime: "string",
    environment: {
        clusterManagerApiService: "string",
        dataset: "string",
        debugOptions: {
            dataSampling: {
                behaviors: [google_native.dataflow.v1b3.DataSamplingConfigBehaviorsItem.DataSamplingBehaviorUnspecified],
            },
            enableHotKeyLogging: false,
        },
        experiments: ["string"],
        flexResourceSchedulingGoal: google_native.dataflow.v1b3.EnvironmentFlexResourceSchedulingGoal.FlexrsUnspecified,
        internalExperiments: {
            string: "string",
        },
        sdkPipelineOptions: {
            string: "string",
        },
        serviceAccountEmail: "string",
        serviceKmsKeyName: "string",
        serviceOptions: ["string"],
        tempStoragePrefix: "string",
        userAgent: {
            string: "string",
        },
        version: {
            string: "string",
        },
        workerPools: [{
            network: "string",
            diskType: "string",
            numThreadsPerWorker: 0,
            onHostMaintenance: "string",
            numWorkers: 0,
            ipConfiguration: google_native.dataflow.v1b3.WorkerPoolIpConfiguration.WorkerIpUnspecified,
            kind: "string",
            machineType: "string",
            metadata: {
                string: "string",
            },
            autoscalingSettings: {
                algorithm: google_native.dataflow.v1b3.AutoscalingSettingsAlgorithm.AutoscalingAlgorithmUnknown,
                maxNumWorkers: 0,
            },
            diskSizeGb: 0,
            defaultPackageSet: google_native.dataflow.v1b3.WorkerPoolDefaultPackageSet.DefaultPackageSetUnknown,
            diskSourceImage: "string",
            packages: [{
                location: "string",
                name: "string",
            }],
            poolArgs: {
                string: "string",
            },
            sdkHarnessContainerImages: [{
                capabilities: ["string"],
                containerImage: "string",
                environmentId: "string",
                useSingleCorePerContainer: false,
            }],
            subnetwork: "string",
            taskrunnerSettings: {
                alsologtostderr: false,
                baseTaskDir: "string",
                baseUrl: "string",
                commandlinesFileName: "string",
                continueOnException: false,
                dataflowApiVersion: "string",
                harnessCommand: "string",
                languageHint: "string",
                logDir: "string",
                logToSerialconsole: false,
                logUploadLocation: "string",
                oauthScopes: ["string"],
                parallelWorkerSettings: {
                    baseUrl: "string",
                    reportingEnabled: false,
                    servicePath: "string",
                    shuffleServicePath: "string",
                    tempStoragePrefix: "string",
                    workerId: "string",
                },
                streamingWorkerMainClass: "string",
                taskGroup: "string",
                taskUser: "string",
                tempStoragePrefix: "string",
                vmId: "string",
                workflowFileName: "string",
            },
            teardownPolicy: google_native.dataflow.v1b3.WorkerPoolTeardownPolicy.TeardownPolicyUnknown,
            dataDisks: [{
                diskType: "string",
                mountPoint: "string",
                sizeGb: 0,
            }],
            zone: "string",
        }],
        workerRegion: "string",
        workerZone: "string",
    },
    id: "string",
    jobMetadata: {
        bigTableDetails: [{
            instanceId: "string",
            project: "string",
            tableId: "string",
        }],
        bigqueryDetails: [{
            dataset: "string",
            project: "string",
            query: "string",
            table: "string",
        }],
        datastoreDetails: [{
            namespace: "string",
            project: "string",
        }],
        fileDetails: [{
            filePattern: "string",
        }],
        pubsubDetails: [{
            subscription: "string",
            topic: "string",
        }],
        sdkVersion: {
            sdkSupportStatus: google_native.dataflow.v1b3.SdkVersionSdkSupportStatus.Unknown,
            version: "string",
            versionDisplayName: "string",
        },
        spannerDetails: [{
            databaseId: "string",
            instanceId: "string",
            project: "string",
        }],
        userDisplayProperties: {
            string: "string",
        },
    },
    labels: {
        string: "string",
    },
    location: "string",
    name: "string",
    pipelineDescription: {
        displayData: [{
            boolValue: false,
            durationValue: "string",
            floatValue: 0,
            int64Value: "string",
            javaClassValue: "string",
            key: "string",
            label: "string",
            namespace: "string",
            shortStrValue: "string",
            strValue: "string",
            timestampValue: "string",
            url: "string",
        }],
        executionPipelineStage: [{
            componentSource: [{
                name: "string",
                originalTransformOrCollection: "string",
                userName: "string",
            }],
            componentTransform: [{
                name: "string",
                originalTransform: "string",
                userName: "string",
            }],
            id: "string",
            inputSource: [{
                name: "string",
                originalTransformOrCollection: "string",
                sizeBytes: "string",
                userName: "string",
            }],
            kind: google_native.dataflow.v1b3.ExecutionStageSummaryKind.UnknownKind,
            name: "string",
            outputSource: [{
                name: "string",
                originalTransformOrCollection: "string",
                sizeBytes: "string",
                userName: "string",
            }],
            prerequisiteStage: ["string"],
        }],
        originalPipelineTransform: [{
            displayData: [{
                boolValue: false,
                durationValue: "string",
                floatValue: 0,
                int64Value: "string",
                javaClassValue: "string",
                key: "string",
                label: "string",
                namespace: "string",
                shortStrValue: "string",
                strValue: "string",
                timestampValue: "string",
                url: "string",
            }],
            id: "string",
            inputCollectionName: ["string"],
            kind: google_native.dataflow.v1b3.TransformSummaryKind.UnknownKind,
            name: "string",
            outputCollectionName: ["string"],
        }],
        stepNamesHash: "string",
    },
    project: "string",
    replaceJobId: "string",
    replacedByJobId: "string",
    requestedState: google_native.dataflow.v1b3.JobRequestedState.JobStateUnknown,
    runtimeUpdatableParams: {
        maxNumWorkers: 0,
        minNumWorkers: 0,
    },
    satisfiesPzs: false,
    stageStates: [{
        currentStateTime: "string",
        executionStageName: "string",
        executionStageState: google_native.dataflow.v1b3.ExecutionStageStateExecutionStageState.JobStateUnknown,
    }],
    startTime: "string",
    steps: [{
        kind: "string",
        name: "string",
        properties: {
            string: "string",
        },
    }],
    stepsLocation: "string",
    tempFiles: ["string"],
    transformNameMapping: {
        string: "string",
    },
    type: google_native.dataflow.v1b3.JobType.JobTypeUnknown,
    view: "string",
});
type: google-native:dataflow/v1b3:Job
properties:
    clientRequestId: string
    createTime: string
    createdFromSnapshotId: string
    currentState: JOB_STATE_UNKNOWN
    currentStateTime: string
    environment:
        clusterManagerApiService: string
        dataset: string
        debugOptions:
            dataSampling:
                behaviors:
                    - DATA_SAMPLING_BEHAVIOR_UNSPECIFIED
            enableHotKeyLogging: false
        experiments:
            - string
        flexResourceSchedulingGoal: FLEXRS_UNSPECIFIED
        internalExperiments:
            string: string
        sdkPipelineOptions:
            string: string
        serviceAccountEmail: string
        serviceKmsKeyName: string
        serviceOptions:
            - string
        tempStoragePrefix: string
        userAgent:
            string: string
        version:
            string: string
        workerPools:
            - autoscalingSettings:
                algorithm: AUTOSCALING_ALGORITHM_UNKNOWN
                maxNumWorkers: 0
              dataDisks:
                - diskType: string
                  mountPoint: string
                  sizeGb: 0
              defaultPackageSet: DEFAULT_PACKAGE_SET_UNKNOWN
              diskSizeGb: 0
              diskSourceImage: string
              diskType: string
              ipConfiguration: WORKER_IP_UNSPECIFIED
              kind: string
              machineType: string
              metadata:
                string: string
              network: string
              numThreadsPerWorker: 0
              numWorkers: 0
              onHostMaintenance: string
              packages:
                - location: string
                  name: string
              poolArgs:
                string: string
              sdkHarnessContainerImages:
                - capabilities:
                    - string
                  containerImage: string
                  environmentId: string
                  useSingleCorePerContainer: false
              subnetwork: string
              taskrunnerSettings:
                alsologtostderr: false
                baseTaskDir: string
                baseUrl: string
                commandlinesFileName: string
                continueOnException: false
                dataflowApiVersion: string
                harnessCommand: string
                languageHint: string
                logDir: string
                logToSerialconsole: false
                logUploadLocation: string
                oauthScopes:
                    - string
                parallelWorkerSettings:
                    baseUrl: string
                    reportingEnabled: false
                    servicePath: string
                    shuffleServicePath: string
                    tempStoragePrefix: string
                    workerId: string
                streamingWorkerMainClass: string
                taskGroup: string
                taskUser: string
                tempStoragePrefix: string
                vmId: string
                workflowFileName: string
              teardownPolicy: TEARDOWN_POLICY_UNKNOWN
              zone: string
        workerRegion: string
        workerZone: string
    id: string
    jobMetadata:
        bigTableDetails:
            - instanceId: string
              project: string
              tableId: string
        bigqueryDetails:
            - dataset: string
              project: string
              query: string
              table: string
        datastoreDetails:
            - namespace: string
              project: string
        fileDetails:
            - filePattern: string
        pubsubDetails:
            - subscription: string
              topic: string
        sdkVersion:
            sdkSupportStatus: UNKNOWN
            version: string
            versionDisplayName: string
        spannerDetails:
            - databaseId: string
              instanceId: string
              project: string
        userDisplayProperties:
            string: string
    labels:
        string: string
    location: string
    name: string
    pipelineDescription:
        displayData:
            - boolValue: false
              durationValue: string
              floatValue: 0
              int64Value: string
              javaClassValue: string
              key: string
              label: string
              namespace: string
              shortStrValue: string
              strValue: string
              timestampValue: string
              url: string
        executionPipelineStage:
            - componentSource:
                - name: string
                  originalTransformOrCollection: string
                  userName: string
              componentTransform:
                - name: string
                  originalTransform: string
                  userName: string
              id: string
              inputSource:
                - name: string
                  originalTransformOrCollection: string
                  sizeBytes: string
                  userName: string
              kind: UNKNOWN_KIND
              name: string
              outputSource:
                - name: string
                  originalTransformOrCollection: string
                  sizeBytes: string
                  userName: string
              prerequisiteStage:
                - string
        originalPipelineTransform:
            - displayData:
                - boolValue: false
                  durationValue: string
                  floatValue: 0
                  int64Value: string
                  javaClassValue: string
                  key: string
                  label: string
                  namespace: string
                  shortStrValue: string
                  strValue: string
                  timestampValue: string
                  url: string
              id: string
              inputCollectionName:
                - string
              kind: UNKNOWN_KIND
              name: string
              outputCollectionName:
                - string
        stepNamesHash: string
    project: string
    replaceJobId: string
    replacedByJobId: string
    requestedState: JOB_STATE_UNKNOWN
    runtimeUpdatableParams:
        maxNumWorkers: 0
        minNumWorkers: 0
    satisfiesPzs: false
    stageStates:
        - currentStateTime: string
          executionStageName: string
          executionStageState: JOB_STATE_UNKNOWN
    startTime: string
    steps:
        - kind: string
          name: string
          properties:
            string: string
    stepsLocation: string
    tempFiles:
        - string
    transformNameMapping:
        string: string
    type: JOB_TYPE_UNKNOWN
    view: string
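For orientation, here is a minimal TypeScript sketch of declaring this resource. The job name, region, labels, and job type below are placeholder values, and in practice a Dataflow pipeline is usually submitted through the Apache Beam SDK or a template rather than by hand-assembling steps on this low-level API resource.

import * as google_native from "@pulumi/google-native";

// A minimal sketch with placeholder values; no steps or environment are set,
// so this illustrates the resource shape rather than a runnable pipeline.
const job = new google_native.dataflow.v1b3.Job("example-job", {
    name: "example-job",       // must match [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    location: "us-central1",   // the regional endpoint that contains this job
    type: "JOB_TYPE_BATCH",
    labels: {
        "team": "data-eng",
    },
});

export const jobId = job.id;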
Job Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Job resource accepts the following input properties:
- ClientRequestId string
- The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- CreateTime string
- The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- CreatedFromSnapshotId string
- If this is specified, the job's initial state is populated from the given snapshot.
- CurrentState Pulumi.GoogleNative.Dataflow.V1b3.JobCurrentState
- The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- CurrentStateTime string
- The timestamp associated with the current state.
- Environment Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Environment
- The environment for the job.
- ExecutionInfo Pulumi.GoogleNative.Dataflow.V1b3.Inputs.JobExecutionInfo
- Deprecated.
- Id string
- The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- JobMetadata Pulumi.GoogleNative.Dataflow.V1b3.Inputs.JobMetadata
- This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- Labels Dictionary<string, string>
- User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- Location string
- The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Name string
- The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- PipelineDescription Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PipelineDescription
- Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ReplaceJobId string
- If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- ReplacedByJobId string
- If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- RequestedState Pulumi.GoogleNative.Dataflow.V1b3.JobRequestedState
- The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- RuntimeUpdatableParams Pulumi.GoogleNative.Dataflow.V1b3.Inputs.RuntimeUpdatableParams
- This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- SatisfiesPzs bool
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- StageStates List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageState>
- This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- StartTime string
- The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.
- Steps List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Step>
- Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- StepsLocation string
- The Cloud Storage location where the steps are stored.
- TempFiles List<string>
- A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- TransformNameMapping Dictionary<string, string>
- The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- Type Pulumi.GoogleNative.Dataflow.V1b3.JobType
- The type of Cloud Dataflow job.
- View string
- The level of information requested in response.
- ClientRequestId string
- The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- CreateTime string
- The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- CreatedFromSnapshotId string
- If this is specified, the job's initial state is populated from the given snapshot.
- CurrentState JobCurrentState
- The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- CurrentStateTime string
- The timestamp associated with the current state.
- Environment
EnvironmentArgs 
- The environment for the job.
- ExecutionInfo JobExecutionInfoArgs
- Deprecated.
- Id string
- The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- JobMetadata JobMetadataArgs
- This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- Labels map[string]string
- User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- Location string
- The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Name string
- The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- PipelineDescription PipelineDescriptionArgs
- Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- Project string
- The ID of the Cloud Platform project that the job belongs to.
- ReplaceJobId string
- If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- ReplacedByJobId string
- If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- RequestedState JobRequestedState
- The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- RuntimeUpdatableParams RuntimeUpdatableParamsArgs
- This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- SatisfiesPzs bool
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- StageStates []ExecutionStageStateArgs
- This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- StartTime string
- The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.
- Steps
[]StepArgs 
- Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- StepsLocation string
- The Cloud Storage location where the steps are stored.
- TempFiles []string
- A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- TransformNameMapping map[string]string
- The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- Type
JobType 
- The type of Cloud Dataflow job.
- View string
- The level of information requested in response.
- clientRequestId String
- The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- createTime String
- The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- createdFromSnapshotId String
- If this is specified, the job's initial state is populated from the given snapshot.
- currentState JobCurrentState
- The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- currentStateTime String
- The timestamp associated with the current state.
- environment Environment
- The environment for the job.
- executionInfo JobExecutionInfo
- Deprecated.
- id String
- The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- jobMetadata JobMetadata 
- This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels Map<String,String>
- User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location String
- The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name String
- The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipelineDescription PipelineDescription 
- Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- replaceJobId String
- If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replacedByJobId String
- If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requestedState JobRequestedState
- The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtimeUpdatableParams RuntimeUpdatableParams
- This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfiesPzs Boolean
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stageStates List<ExecutionStageState>
- This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- startTime String
- The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.
- steps List<Step>
- Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- stepsLocation String
- The Cloud Storage location where the steps are stored.
- tempFiles List<String>
- A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transformNameMapping Map<String,String>
- The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type
JobType 
- The type of Cloud Dataflow job.
- view String
- The level of information requested in response.
- clientRequestId string
- The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- createTime string
- The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- createdFromSnapshotId string
- If this is specified, the job's initial state is populated from the given snapshot.
- currentState JobCurrentState
- The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- currentStateTime string
- The timestamp associated with the current state.
- environment Environment
- The environment for the job.
- executionInfo JobExecutionInfo
- Deprecated.
- id string
- The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- jobMetadata JobMetadata 
- This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels {[key: string]: string}
- User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location string
- The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name string
- The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipelineDescription PipelineDescription 
- Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project string
- The ID of the Cloud Platform project that the job belongs to.
- replaceJobId string
- If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replacedByJobId string
- If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requestedState JobRequestedState
- The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtimeUpdatableParams RuntimeUpdatableParams
- This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfiesPzs boolean
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stageStates ExecutionStageState[]
- This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- startTime string
- The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.
- steps Step[]
- Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- stepsLocation string
- The Cloud Storage location where the steps are stored.
- tempFiles string[]
- A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transformNameMapping {[key: string]: string}
- The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type
JobType 
- The type of Cloud Dataflow job.
- view string
- The level of information requested in response.
- client_request_id str
- The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- create_time str
- The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- created_from_snapshot_id str
- If this is specified, the job's initial state is populated from the given snapshot.
- current_state JobCurrentState
- The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- current_state_time str
- The timestamp associated with the current state.
- environment
EnvironmentArgs 
- The environment for the job.
- execution_info JobExecutionInfoArgs
- Deprecated.
- id str
- The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- job_metadata JobMetadataArgs
- This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels Mapping[str, str]
- User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location str
- The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name str
- The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipeline_description PipelineDescriptionArgs
- Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project str
- The ID of the Cloud Platform project that the job belongs to.
- replace_job_id str
- If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replaced_by_job_id str
- If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requested_state JobRequestedState
- The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtime_updatable_params RuntimeUpdatableParamsArgs
- This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfies_pzs bool
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stage_states Sequence[ExecutionStageStateArgs]
- This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- start_time str
- The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.
- steps
Sequence[StepArgs] 
- Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- steps_location str
- The Cloud Storage location where the steps are stored.
- temp_files Sequence[str]
- A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transform_name_mapping Mapping[str, str]
- The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type
JobType 
- The type of Cloud Dataflow job.
- view str
- The level of information requested in response.
- clientRequestId String
- The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- createTime String
- The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- createdFromSnapshotId String
- If this is specified, the job's initial state is populated from the given snapshot.
- currentState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"
- The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- currentStateTime String
- The timestamp associated with the current state.
- environment Property Map
- The environment for the job.
- executionInfo Property Map
- Deprecated.
- id String
- The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
- jobMetadata Property Map
- This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels Map<String>
- User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location String
- The [regional endpoint] (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name String
- The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipelineDescription Property Map
- Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project String
- The ID of the Cloud Platform project that the job belongs to.
- replaceJobId String
- If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replacedByJobId String
- If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requestedState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"
- The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtimeUpdatableParams Property Map
- This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfiesPzs Boolean
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stageStates List<Property Map>
- This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- startTime String
- The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals to create_time and is immutable and set by the Cloud Dataflow service.
- steps List<Property Map>
- Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- stepsLocation String
- The Cloud Storage location where the steps are stored.
- tempFiles List<String>
- A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transformNameMapping Map<String>
- The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type "JOB_TYPE_UNKNOWN" | "JOB_TYPE_BATCH" | "JOB_TYPE_STREAMING"
- The type of Cloud Dataflow job.
- view String
- The level of information requested in response.
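To make the update-related inputs above concrete, the following hedged TypeScript sketch shows a replacement job referencing the job it supersedes. The job ID and the transform name prefixes are invented placeholders, and whether such an in-place update is accepted depends on the running pipeline's compatibility.

import * as google_native from "@pulumi/google-native";

// Hypothetical replacement job: the ID below stands in for the running job
// being replaced, and the mapping renames its transform name prefixes.
const replacement = new google_native.dataflow.v1b3.Job("replacement-job", {
    name: "example-job",
    location: "us-central1",
    replaceJobId: "REPLACE_WITH_EXISTING_JOB_ID",
    transformNameMapping: {
        "oldPrefix": "newPrefix",
    },
});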
Outputs
All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:
- Id string
- The provider-assigned unique ID for this managed resource.
- SatisfiesPzi bool
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- Id string
- The provider-assigned unique ID for this managed resource.
- SatisfiesPzi bool
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- id String
- The provider-assigned unique ID for this managed resource.
- satisfiesPzi Boolean
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- id string
- The provider-assigned unique ID for this managed resource.
- satisfiesPzi boolean
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- id str
- The provider-assigned unique ID for this managed resource.
- satisfies_pzi bool
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- id String
- The provider-assigned unique ID for this managed resource.
- satisfiesPzi Boolean
- Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
Supporting Types
AutoscalingSettings, AutoscalingSettingsArgs    
- Algorithm Pulumi.GoogleNative.Dataflow.V1b3.AutoscalingSettingsAlgorithm
- The algorithm to use for autoscaling.
- MaxNumWorkers int
- The maximum number of workers to cap scaling at.
- Algorithm AutoscalingSettingsAlgorithm
- The algorithm to use for autoscaling.
- MaxNumWorkers int
- The maximum number of workers to cap scaling at.
- algorithm AutoscalingSettingsAlgorithm
- The algorithm to use for autoscaling.
- maxNumWorkers Integer
- The maximum number of workers to cap scaling at.
- algorithm AutoscalingSettingsAlgorithm
- The algorithm to use for autoscaling.
- maxNumWorkers number
- The maximum number of workers to cap scaling at.
- algorithm AutoscalingSettingsAlgorithm
- The algorithm to use for autoscaling.
- max_num_workers int
- The maximum number of workers to cap scaling at.
- algorithm "AUTOSCALING_ALGORITHM_UNKNOWN" | "AUTOSCALING_ALGORITHM_NONE" | "AUTOSCALING_ALGORITHM_BASIC"
- The algorithm to use for autoscaling.
- maxNumWorkers Number
- The maximum number of workers to cap scaling at.
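For context, autoscaling settings are supplied per worker pool inside the job's environment. The hedged TypeScript fragment below sketches that placement; the machine type and worker cap are illustrative placeholders, not recommendations.

import * as google_native from "@pulumi/google-native";

// Hypothetical fragment showing where AutoscalingSettings sits inside
// environment.workerPools; all concrete values are placeholders.
const autoscaledJob = new google_native.dataflow.v1b3.Job("autoscaled-job", {
    location: "us-central1",
    environment: {
        workerPools: [{
            kind: "harness",
            machineType: "n1-standard-2",
            autoscalingSettings: {
                algorithm: "AUTOSCALING_ALGORITHM_BASIC",
                maxNumWorkers: 10,
            },
        }],
    },
});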
AutoscalingSettingsAlgorithm, AutoscalingSettingsAlgorithmArgs      
- AutoscalingAlgorithmUnknown
- AUTOSCALING_ALGORITHM_UNKNOWN - The algorithm is unknown, or unspecified.
- AutoscalingAlgorithmNone
- AUTOSCALING_ALGORITHM_NONE - Disable autoscaling.
- AutoscalingAlgorithmBasic
- AUTOSCALING_ALGORITHM_BASIC - Increase worker count over time to reduce job execution time.
- AutoscalingSettingsAlgorithmAutoscalingAlgorithmUnknown
- AUTOSCALING_ALGORITHM_UNKNOWN - The algorithm is unknown, or unspecified.
- AutoscalingSettingsAlgorithmAutoscalingAlgorithmNone
- AUTOSCALING_ALGORITHM_NONE - Disable autoscaling.
- AutoscalingSettingsAlgorithmAutoscalingAlgorithmBasic
- AUTOSCALING_ALGORITHM_BASIC - Increase worker count over time to reduce job execution time.
- AutoscalingAlgorithmUnknown
- AUTOSCALING_ALGORITHM_UNKNOWN - The algorithm is unknown, or unspecified.
- AutoscalingAlgorithmNone
- AUTOSCALING_ALGORITHM_NONE - Disable autoscaling.
- AutoscalingAlgorithmBasic
- AUTOSCALING_ALGORITHM_BASIC - Increase worker count over time to reduce job execution time.
- AutoscalingAlgorithmUnknown
- AUTOSCALING_ALGORITHM_UNKNOWN - The algorithm is unknown, or unspecified.
- AutoscalingAlgorithmNone
- AUTOSCALING_ALGORITHM_NONE - Disable autoscaling.
- AutoscalingAlgorithmBasic
- AUTOSCALING_ALGORITHM_BASIC - Increase worker count over time to reduce job execution time.
- AUTOSCALING_ALGORITHM_UNKNOWN
- AUTOSCALING_ALGORITHM_UNKNOWN - The algorithm is unknown, or unspecified.
- AUTOSCALING_ALGORITHM_NONE
- AUTOSCALING_ALGORITHM_NONE - Disable autoscaling.
- AUTOSCALING_ALGORITHM_BASIC
- AUTOSCALING_ALGORITHM_BASIC - Increase worker count over time to reduce job execution time.
- "AUTOSCALING_ALGORITHM_UNKNOWN"
- AUTOSCALING_ALGORITHM_UNKNOWN - The algorithm is unknown, or unspecified.
- "AUTOSCALING_ALGORITHM_NONE"
- AUTOSCALING_ALGORITHM_NONE - Disable autoscaling.
- "AUTOSCALING_ALGORITHM_BASIC"
- AUTOSCALING_ALGORITHM_BASIC - Increase worker count over time to reduce job execution time.
AutoscalingSettingsResponse, AutoscalingSettingsResponseArgs      
- Algorithm string
- The algorithm to use for autoscaling.
- MaxNumWorkers int
- The maximum number of workers to cap scaling at.
- Algorithm string
- The algorithm to use for autoscaling.
- MaxNumWorkers int
- The maximum number of workers to cap scaling at.
- algorithm String
- The algorithm to use for autoscaling.
- maxNumWorkers Integer
- The maximum number of workers to cap scaling at.
- algorithm string
- The algorithm to use for autoscaling.
- maxNumWorkers number
- The maximum number of workers to cap scaling at.
- algorithm str
- The algorithm to use for autoscaling.
- max_num_workers int
- The maximum number of workers to cap scaling at.
- algorithm String
- The algorithm to use for autoscaling.
- maxNumWorkers Number
- The maximum number of workers to cap scaling at.
BigQueryIODetails, BigQueryIODetailsArgs      
BigQueryIODetailsResponse, BigQueryIODetailsResponseArgs        
BigTableIODetails, BigTableIODetailsArgs      
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- TableId string
- TableId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- TableId string
- TableId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- tableId String
- TableId accessed in the connection.
- instanceId string
- InstanceId accessed in the connection.
- project string
- ProjectId accessed in the connection.
- tableId string
- TableId accessed in the connection.
- instance_id str
- InstanceId accessed in the connection.
- project str
- ProjectId accessed in the connection.
- table_id str
- TableId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- tableId String
- TableId accessed in the connection.
BigTableIODetailsResponse, BigTableIODetailsResponseArgs        
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- TableId string
- TableId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- TableId string
- TableId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- tableId String
- TableId accessed in the connection.
- instanceId string
- InstanceId accessed in the connection.
- project string
- ProjectId accessed in the connection.
- tableId string
- TableId accessed in the connection.
- instance_id str
- InstanceId accessed in the connection.
- project str
- ProjectId accessed in the connection.
- table_id str
- TableId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- tableId String
- TableId accessed in the connection.
ComponentSource, ComponentSourceArgs    
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- userName string
- Human-readable name for this transform; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform_or_collection str
- User name for the original user transform or collection with which this source is most closely associated.
- user_name str
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
ComponentSourceResponse, ComponentSourceResponseArgs      
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- userName string
- Human-readable name for this transform; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform_or_collection str
- User name for the original user transform or collection with which this source is most closely associated.
- user_name str
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
ComponentTransform, ComponentTransformArgs    
- Name string
- Dataflow service generated name for this source.
- OriginalTransform string
- User name for the original user transform with which this transform is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransform string
- User name for the original user transform with which this transform is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransform String
- User name for the original user transform with which this transform is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransform string
- User name for the original user transform with which this transform is most closely associated.
- userName string
- Human-readable name for this transform; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform str
- User name for the original user transform with which this transform is most closely associated.
- user_name str
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransform String
- User name for the original user transform with which this transform is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
ComponentTransformResponse, ComponentTransformResponseArgs      
- Name string
- Dataflow service generated name for this source.
- OriginalTransform string
- User name for the original user transform with which this transform is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransform string
- User name for the original user transform with which this transform is most closely associated.
- UserName string
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransform String
- User name for the original user transform with which this transform is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransform string
- User name for the original user transform with which this transform is most closely associated.
- userName string
- Human-readable name for this transform; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform str
- User name for the original user transform with which this transform is most closely associated.
- user_name str
- Human-readable name for this transform; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransform String
- User name for the original user transform with which this transform is most closely associated.
- userName String
- Human-readable name for this transform; may be user or system generated.
DataSamplingConfig, DataSamplingConfigArgs      
- Behaviors List<Pulumi.GoogleNative.Dataflow.V1b3.DataSamplingConfigBehaviorsItem>
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- Behaviors []DataSamplingConfigBehaviorsItem
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors List<DataSamplingConfigBehaviorsItem>
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors DataSamplingConfigBehaviorsItem[]
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors Sequence[DataSamplingConfigBehaviorsItem]
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors List<"DATA_SAMPLING_BEHAVIOR_UNSPECIFIED" | "DISABLED" | "ALWAYS_ON" | "EXCEPTIONS">
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
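As a hedged sketch of where this configuration lives, the fragment below enables element sampling through the job environment's debug options; the values simply mirror the behaviors documented here and in the enum that follows, and the job itself is a placeholder.

import * as google_native from "@pulumi/google-native";

// Hypothetical job showing environment.debugOptions.dataSampling.behaviors.
const sampledJob = new google_native.dataflow.v1b3.Job("sampled-job", {
    location: "us-central1",
    environment: {
        debugOptions: {
            dataSampling: {
                // ALWAYS_ON samples in-flight elements; EXCEPTIONS samples
                // inputs to user DoFns that throw. Ordering does not matter.
                behaviors: ["ALWAYS_ON", "EXCEPTIONS"],
            },
        },
    },
});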
DataSamplingConfigBehaviorsItem, DataSamplingConfigBehaviorsItemArgs          
- DataSamplingBehaviorUnspecified
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED - If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
- Disabled
- DISABLED - When given, disables element sampling. Has same behavior as not setting the behavior.
- AlwaysOn
- ALWAYS_ON - When given, enables sampling in-flight from all PCollections.
- Exceptions
- EXCEPTIONS - When given, enables sampling input elements when a user-defined DoFn causes an exception.
- DataSamplingConfigBehaviorsItemDataSamplingBehaviorUnspecified
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED - If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
- DataSamplingConfigBehaviorsItemDisabled
- DISABLED - When given, disables element sampling. Has same behavior as not setting the behavior.
- DataSamplingConfigBehaviorsItemAlwaysOn
- ALWAYS_ON - When given, enables sampling in-flight from all PCollections.
- DataSamplingConfigBehaviorsItemExceptions
- EXCEPTIONS - When given, enables sampling input elements when a user-defined DoFn causes an exception.
- DataSamplingBehaviorUnspecified
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED - If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
- Disabled
- DISABLED - When given, disables element sampling. Has same behavior as not setting the behavior.
- AlwaysOn
- ALWAYS_ON - When given, enables sampling in-flight from all PCollections.
- Exceptions
- EXCEPTIONS - When given, enables sampling input elements when a user-defined DoFn causes an exception.
- DataSamplingBehaviorUnspecified
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED - If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
- Disabled
- DISABLED - When given, disables element sampling. Has same behavior as not setting the behavior.
- AlwaysOn
- ALWAYS_ON - When given, enables sampling in-flight from all PCollections.
- Exceptions
- EXCEPTIONS - When given, enables sampling input elements when a user-defined DoFn causes an exception.
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED - If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
- DISABLED
- DISABLED - When given, disables element sampling. Has same behavior as not setting the behavior.
- ALWAYS_ON
- ALWAYS_ON - When given, enables sampling in-flight from all PCollections.
- EXCEPTIONS
- EXCEPTIONS - When given, enables sampling input elements when a user-defined DoFn causes an exception.
- "DATA_SAMPLING_BEHAVIOR_UNSPECIFIED"
- DATA_SAMPLING_BEHAVIOR_UNSPECIFIED - If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
- "DISABLED"
- DISABLED - When given, disables element sampling. Has same behavior as not setting the behavior.
- "ALWAYS_ON"
- ALWAYS_ON - When given, enables sampling in-flight from all PCollections.
- "EXCEPTIONS"
- EXCEPTIONS - When given, enables sampling input elements when a user-defined DoFn causes an exception.
DataSamplingConfigResponse, DataSamplingConfigResponseArgs        
- Behaviors List<string>
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- Behaviors []string
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors List<String>
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors string[]
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors Sequence[str]
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
- behaviors List<String>
- List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Can be used to specify multiple behaviors like, behaviors = [ALWAYS_ON, EXCEPTIONS] for specifying periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and ignore the other given behaviors. Ordering does not matter.
DatastoreIODetails, DatastoreIODetailsArgs    
DatastoreIODetailsResponse, DatastoreIODetailsResponseArgs      
DebugOptions, DebugOptionsArgs    
- DataSampling Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfig
- Configuration options for sampling elements from a running pipeline.
- EnableHotKeyLogging bool
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- DataSampling DataSamplingConfig
- Configuration options for sampling elements from a running pipeline.
- EnableHotKeyLogging bool
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling DataSamplingConfig
- Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging Boolean
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling DataSamplingConfig
- Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging boolean
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- data_sampling DataSamplingConfig
- Configuration options for sampling elements from a running pipeline.
- enable_hot_key_logging bool
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling Property Map
- Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging Boolean
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
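As a hedged companion to the sampling sketch above, hot-key logging is enabled through the same debugOptions block; all identifiers below are placeholders and the rest of the job configuration is left out.

```typescript
import * as google_native from "@pulumi/google-native";

const hotKeyJob = new google_native.dataflow.v1b3.Job("hot-key-logging-job", {
    project: "my-project",
    location: "us-central1",
    environment: {
        debugOptions: {
            // Log the literal hot key to Cloud Logging when the service detects one.
            enableHotKeyLogging: true,
        },
    },
});
```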
DebugOptionsResponse, DebugOptionsResponseArgs      
- DataSampling Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfigResponse
- Configuration options for sampling elements from a running pipeline.
- EnableHotKeyLogging bool
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- DataSampling DataSamplingConfigResponse
- Configuration options for sampling elements from a running pipeline.
- EnableHotKeyLogging bool
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling DataSamplingConfigResponse
- Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging Boolean
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling DataSamplingConfigResponse
- Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging boolean
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- data_sampling DataSamplingConfigResponse
- Configuration options for sampling elements from a running pipeline.
- enable_hot_key_logging bool
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling Property Map
- Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging Boolean
- When true, enables the logging of the literal hot key to the user's Cloud Logging.
Disk, DiskArgs  
- DiskType string
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- MountPoint string
- Directory in a VM where disk is mounted.
- SizeGb int
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskType string
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- MountPoint string
- Directory in a VM where disk is mounted.
- SizeGb int
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType String
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint String
- Directory in a VM where disk is mounted.
- sizeGb Integer
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType string
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint string
- Directory in a VM where disk is mounted.
- sizeGb number
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- disk_type str
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mount_point str
- Directory in a VM where disk is mounted.
- size_gb int
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType String
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint String
- Directory in a VM where disk is mounted.
- sizeGb Number
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
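Disk entries are normally attached to a worker pool rather than set on their own. The sketch below is illustrative only: it assumes the WorkerPool type's kind and dataDisks fields (documented elsewhere on this page), and the project, zone, and mount point values are placeholders.

```typescript
import * as google_native from "@pulumi/google-native";

const customDiskJob = new google_native.dataflow.v1b3.Job("custom-disk-job", {
    project: "my-project",
    location: "us-central1",
    environment: {
        workerPools: [{
            kind: "harness", // assumed worker-pool kind; see the WorkerPool type
            dataDisks: [{
                // Full Compute Engine disk type resource name for the workers' zone.
                diskType: "compute.googleapis.com/projects/my-project/zones/us-central1-a/diskTypes/pd-ssd",
                sizeGb: 100,            // zero or unset lets the service pick a default
                mountPoint: "/mnt/data",
            }],
        }],
    },
});
```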
DiskResponse, DiskResponseArgs    
- DiskType string
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- MountPoint string
- Directory in a VM where disk is mounted.
- SizeGb int
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskType string
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- MountPoint string
- Directory in a VM where disk is mounted.
- SizeGb int
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType String
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint String
- Directory in a VM where disk is mounted.
- sizeGb Integer
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType string
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint string
- Directory in a VM where disk is mounted.
- sizeGb number
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- disk_type str
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mount_point str
- Directory in a VM where disk is mounted.
- size_gb int
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType String
- Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint String
- Directory in a VM where disk is mounted.
- sizeGb Number
- Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
DisplayData, DisplayDataArgs    
- BoolValue bool
- Contains value if the data is of a boolean type.
- DurationValue string
- Contains value if the data is of duration type.
- FloatValue double
- Contains value if the data is of float type.
- Int64Value string
- Contains value if the data is of int64 type.
- JavaClassValue string
- Contains value if the data is of java class type.
- Key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Label string
- An optional label to display in a dax UI for the element.
- Namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- ShortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- StrValue string
- Contains value if the data is of string type.
- TimestampValue string
- Contains value if the data is of timestamp type.
- Url string
- An optional full URL.
- BoolValue bool
- Contains value if the data is of a boolean type.
- DurationValue string
- Contains value if the data is of duration type.
- FloatValue float64
- Contains value if the data is of float type.
- Int64Value string
- Contains value if the data is of int64 type.
- JavaClassValue string
- Contains value if the data is of java class type.
- Key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Label string
- An optional label to display in a dax UI for the element.
- Namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- ShortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- StrValue string
- Contains value if the data is of string type.
- TimestampValue string
- Contains value if the data is of timestamp type.
- Url string
- An optional full URL.
- boolValue Boolean
- Contains value if the data is of a boolean type.
- durationValue String
- Contains value if the data is of duration type.
- floatValue Double
- Contains value if the data is of float type.
- int64Value String
- Contains value if the data is of int64 type.
- javaClassValue String
- Contains value if the data is of java class type.
- key String
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label String
- An optional label to display in a dax UI for the element.
- namespace String
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue String
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue String
- Contains value if the data is of string type.
- timestampValue String
- Contains value if the data is of timestamp type.
- url String
- An optional full URL.
- boolValue boolean
- Contains value if the data is of a boolean type.
- durationValue string
- Contains value if the data is of duration type.
- floatValue number
- Contains value if the data is of float type.
- int64Value string
- Contains value if the data is of int64 type.
- javaClassValue string
- Contains value if the data is of java class type.
- key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label string
- An optional label to display in a dax UI for the element.
- namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue string
- Contains value if the data is of string type.
- timestampValue string
- Contains value if the data is of timestamp type.
- url string
- An optional full URL.
- bool_value bool
- Contains value if the data is of a boolean type.
- duration_value str
- Contains value if the data is of duration type.
- float_value float
- Contains value if the data is of float type.
- int64_value str
- Contains value if the data is of int64 type.
- java_class_value str
- Contains value if the data is of java class type.
- key str
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label str
- An optional label to display in a dax UI for the element.
- namespace str
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- short_str_value str
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- str_value str
- Contains value if the data is of string type.
- timestamp_value str
- Contains value if the data is of timestamp type.
- url str
- An optional full URL.
- boolValue Boolean
- Contains value if the data is of a boolean type.
- durationValue String
- Contains value if the data is of duration type.
- floatValue Number
- Contains value if the data is of float type.
- int64Value String
- Contains value if the data is of int64 type.
- javaClassValue String
- Contains value if the data is of java class type.
- key String
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label String
- An optional label to display in a dax UI for the element.
- namespace String
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue String
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue String
- Contains value if the data is of string type.
- timestampValue String
- Contains value if the data is of timestamp type.
- url String
- An optional full URL.
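DisplayData is populated by the service as part of the job's pipeline description rather than supplied by hand, so a sketch can only read it back. The snippet below assumes the pipelineDescription output property carries a displayData list, per the PipelineDescription type on this page; the job itself is a placeholder.

```typescript
import * as google_native from "@pulumi/google-native";

// Placeholder job; a real one would also carry its pipeline definition.
const job = new google_native.dataflow.v1b3.Job("display-data-demo", {
    project: "my-project",
    location: "us-central1",
});

// Each DisplayData entry carries key/namespace plus one of the typed value fields;
// shortStrValue, when present, is the compact form of a longer strValue or
// javaClassValue and is what monitoring UIs tend to show.
export const displayData = job.pipelineDescription.apply(pd => pd.displayData);
```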
DisplayDataResponse, DisplayDataResponseArgs      
- BoolValue bool
- Contains value if the data is of a boolean type.
- DurationValue string
- Contains value if the data is of duration type.
- FloatValue double
- Contains value if the data is of float type.
- Int64Value string
- Contains value if the data is of int64 type.
- JavaClassValue string
- Contains value if the data is of java class type.
- Key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Label string
- An optional label to display in a dax UI for the element.
- Namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- ShortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- StrValue string
- Contains value if the data is of string type.
- TimestampValue string
- Contains value if the data is of timestamp type.
- Url string
- An optional full URL.
- BoolValue bool
- Contains value if the data is of a boolean type.
- DurationValue string
- Contains value if the data is of duration type.
- FloatValue float64
- Contains value if the data is of float type.
- Int64Value string
- Contains value if the data is of int64 type.
- JavaClassValue string
- Contains value if the data is of java class type.
- Key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Label string
- An optional label to display in a dax UI for the element.
- Namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- ShortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- StrValue string
- Contains value if the data is of string type.
- TimestampValue string
- Contains value if the data is of timestamp type.
- Url string
- An optional full URL.
- boolValue Boolean
- Contains value if the data is of a boolean type.
- durationValue String
- Contains value if the data is of duration type.
- floatValue Double
- Contains value if the data is of float type.
- int64Value String
- Contains value if the data is of int64 type.
- javaClassValue String
- Contains value if the data is of java class type.
- key String
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label String
- An optional label to display in a dax UI for the element.
- namespace String
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue String
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue String
- Contains value if the data is of string type.
- timestampValue String
- Contains value if the data is of timestamp type.
- url String
- An optional full URL.
- boolValue boolean
- Contains value if the data is of a boolean type.
- durationValue string
- Contains value if the data is of duration type.
- floatValue number
- Contains value if the data is of float type.
- int64Value string
- Contains value if the data is of int64 type.
- javaClassValue string
- Contains value if the data is of java class type.
- key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label string
- An optional label to display in a dax UI for the element.
- namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue string
- Contains value if the data is of string type.
- timestampValue string
- Contains value if the data is of timestamp type.
- url string
- An optional full URL.
- bool_value bool
- Contains value if the data is of a boolean type.
- duration_value str
- Contains value if the data is of duration type.
- float_value float
- Contains value if the data is of float type.
- int64_value str
- Contains value if the data is of int64 type.
- java_class_value str
- Contains value if the data is of java class type.
- key str
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label str
- An optional label to display in a dax UI for the element.
- namespace str
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- short_str_value str
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- str_value str
- Contains value if the data is of string type.
- timestamp_value str
- Contains value if the data is of timestamp type.
- url str
- An optional full URL.
- boolValue Boolean
- Contains value if the data is of a boolean type.
- durationValue String
- Contains value if the data is of duration type.
- floatValue Number
- Contains value if the data is of float type.
- int64Value String
- Contains value if the data is of int64 type.
- javaClassValue String
- Contains value if the data is of java class type.
- key String
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label String
- An optional label to display in a dax UI for the element.
- namespace String
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue String
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue String
- Contains value if the data is of string type.
- timestampValue String
- Contains value if the data is of timestamp type.
- url String
- An optional full URL.
Environment, EnvironmentArgs  
- ClusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- Dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptions
- Any debugging options to be supplied to the job.
- Experiments List<string>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- FlexResourceSchedulingGoal Pulumi.GoogleNative.Dataflow.V1b3.EnvironmentFlexResourceSchedulingGoal
- Which Flexible Resource Scheduling mode to run in.
- InternalExperiments Dictionary<string, string>
- Experimental settings.
- SdkPipelineOptions Dictionary<string, string>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- ServiceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- ServiceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- ServiceOptions List<string>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME} to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- UserAgent Dictionary<string, string>
- A description of the process that generated the request.
- Version Dictionary<string, string>
- A structure describing which components and their versions of the service are required in order to run the job.
- WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPool>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- ClusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- Dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- DebugOptions DebugOptions 
- Any debugging options to be supplied to the job.
- Experiments []string
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- FlexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal
- Which Flexible Resource Scheduling mode to run in.
- InternalExperiments map[string]string
- Experimental settings.
- SdkPipelineOptions map[string]string
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- ServiceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- ServiceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- ServiceOptions []string
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME} to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- UserAgent map[string]string
- A description of the process that generated the request.
- Version map[string]string
- A structure describing which components and their versions of the service are required in order to run the job.
- WorkerPools []WorkerPool 
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService String
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset String
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions DebugOptions 
- Any debugging options to be supplied to the job.
- experiments List<String>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments Map<String,String>
- Experimental settings.
- sdkPipelineOptions Map<String,String>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail String
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName String
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions List<String>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME} to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- userAgent Map<String,String>
- A description of the process that generated the request.
- version Map<String,String>
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools List<WorkerPool> 
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions DebugOptions 
- Any debugging options to be supplied to the job.
- experiments string[]
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments {[key: string]: string}
- Experimental settings.
- sdkPipelineOptions {[key: string]: string}
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions string[]
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- tempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME} to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- userAgent {[key: string]: string}
- A description of the process that generated the request.
- version {[key: string]: string}
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools WorkerPool[] 
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- cluster_manager_api_service str
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset str
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debug_options DebugOptions 
- Any debugging options to be supplied to the job.
- experiments Sequence[str]
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flex_resource_scheduling_goal EnvironmentFlexResourceSchedulingGoal
- Which Flexible Resource Scheduling mode to run in.
- internal_experiments Mapping[str, str]
- Experimental settings.
- sdk_pipeline_options Mapping[str, str]
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- service_account_email str
- Identity to run virtual machines as. Defaults to the default account.
- service_kms_key_name str
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- service_options Sequence[str]
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- temp_storage_prefix str
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME} to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- user_agent Mapping[str, str]
- A description of the process that generated the request.
- version Mapping[str, str]
- A structure describing which components and their versions of the service are required in order to run the job.
- worker_pools Sequence[WorkerPool] 
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService String
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset String
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions Property Map
- Any debugging options to be supplied to the job.
- experiments List<String>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED"
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments Map<String>
- Experimental settings.
- sdkPipelineOptions Map<String>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail String
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName String
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions List<String>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME} to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- userAgent Map<String>
- A description of the process that generated the request.
- version Map<String>
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools List<Property Map>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
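The environment fields above are supplied together as a single EnvironmentArgs value on the job. The sketch below shows roughly how that looks in the Python SDK; the project, bucket, region, and service account are placeholder values, and the rest of the job configuration (steps, type, and so on) is assumed to be defined elsewhere, so treat it as a minimal illustration rather than a complete pipeline definition.

import pulumi_google_native as google_native

# Placeholder project, bucket, region, and service account values.
job = google_native.dataflow.v1b3.Job(
    "example-job",
    project="my-project",
    location="us-central1",
    environment=google_native.dataflow.v1b3.EnvironmentArgs(
        temp_storage_prefix="storage.googleapis.com/my-bucket/temp",
        service_account_email="dataflow-worker@my-project.iam.gserviceaccount.com",
        worker_region="us-central1",
    ),
)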
EnvironmentFlexResourceSchedulingGoal, EnvironmentFlexResourceSchedulingGoalArgs          
- FlexrsUnspecified 
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FlexrsSpeedOptimized 
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FlexrsCostOptimized 
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- EnvironmentFlexResourceSchedulingGoalFlexrsUnspecified 
- FLEXRS_UNSPECIFIED: Run in the default mode.
- EnvironmentFlexResourceSchedulingGoalFlexrsSpeedOptimized 
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- EnvironmentFlexResourceSchedulingGoalFlexrsCostOptimized 
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- FlexrsUnspecified 
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FlexrsSpeedOptimized 
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FlexrsCostOptimized 
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- FlexrsUnspecified 
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FlexrsSpeedOptimized 
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FlexrsCostOptimized 
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- FLEXRS_UNSPECIFIED
- FLEXRS_UNSPECIFIED: Run in the default mode.
- FLEXRS_SPEED_OPTIMIZED
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- FLEXRS_COST_OPTIMIZED
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
- "FLEXRS_UNSPECIFIED"
- FLEXRS_UNSPECIFIED: Run in the default mode.
- "FLEXRS_SPEED_OPTIMIZED"
- FLEXRS_SPEED_OPTIMIZED: Optimize for lower execution time.
- "FLEXRS_COST_OPTIMIZED"
- FLEXRS_COST_OPTIMIZED: Optimize for lower cost.
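To pin one of these scheduling goals from the Python SDK, the generated enum can be passed directly on the environment. A minimal sketch, assuming the usual pulumi_google_native module layout:

import pulumi_google_native as google_native

# Request FlexRS cost-optimized scheduling for the job's environment.
flexrs_environment = google_native.dataflow.v1b3.EnvironmentArgs(
    flex_resource_scheduling_goal=google_native.dataflow.v1b3.EnvironmentFlexResourceSchedulingGoal.FLEXRS_COST_OPTIMIZED,
)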
EnvironmentResponse, EnvironmentResponseArgs    
- ClusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- Dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsResponse
- Any debugging options to be supplied to the job.
- Experiments List<string>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- FlexResourceSchedulingGoal string
- Which Flexible Resource Scheduling mode to run in.
- InternalExperiments Dictionary<string, string>
- Experimental settings.
- SdkPipelineOptions Dictionary<string, string>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- ServiceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- ServiceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- ServiceOptions List<string>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- ShuffleMode string
- The shuffle mode used for the job.
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- UseStreamingEngineResourceBasedBilling bool
- Whether the job uses the new streaming engine billing model based on resource usage.
- UserAgent Dictionary<string, string>
- A description of the process that generated the request.
- Version Dictionary<string, string>
- A structure describing which components and their versions of the service are required in order to run the job.
- WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolResponse>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- ClusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- Dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- DebugOptions DebugOptionsResponse
- Any debugging options to be supplied to the job.
- Experiments []string
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- FlexResourceSchedulingGoal string
- Which Flexible Resource Scheduling mode to run in.
- InternalExperiments map[string]string
- Experimental settings.
- SdkPipelineOptions map[string]string
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- ServiceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- ServiceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- ServiceOptions []string
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- ShuffleMode string
- The shuffle mode used for the job.
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- UseStreamingEngineResourceBasedBilling bool
- Whether the job uses the new streaming engine billing model based on resource usage.
- UserAgent map[string]string
- A description of the process that generated the request.
- Version map[string]string
- A structure describing which components and their versions of the service are required in order to run the job.
- WorkerPools []WorkerPoolResponse
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService String
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset String
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions DebugOptionsResponse
- Any debugging options to be supplied to the job.
- experiments List<String>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal String
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments Map<String,String>
- Experimental settings.
- sdkPipelineOptions Map<String,String>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail String
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName String
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions List<String>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffleMode String
- The shuffle mode used for the job.
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- useStreamingEngineResourceBasedBilling Boolean
- Whether the job uses the new streaming engine billing model based on resource usage.
- userAgent Map<String,String>
- A description of the process that generated the request.
- version Map<String,String>
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools List<WorkerPoolResponse>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions DebugOptionsResponse
- Any debugging options to be supplied to the job.
- experiments string[]
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal string
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments {[key: string]: string}
- Experimental settings.
- sdkPipelineOptions {[key: string]: string}
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions string[]
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffleMode string
- The shuffle mode used for the job.
- tempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- useStreamingEngineResourceBasedBilling boolean
- Whether the job uses the new streaming engine billing model based on resource usage.
- userAgent {[key: string]: string}
- A description of the process that generated the request.
- version {[key: string]: string}
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools WorkerPoolResponse[]
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- cluster_manager_api_service str
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset str
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debug_options DebugOptionsResponse
- Any debugging options to be supplied to the job.
- experiments Sequence[str]
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flex_resource_scheduling_goal str
- Which Flexible Resource Scheduling mode to run in.
- internal_experiments Mapping[str, str]
- Experimental settings.
- sdk_pipeline_options Mapping[str, str]
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- service_account_email str
- Identity to run virtual machines as. Defaults to the default account.
- service_kms_key_name str
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- service_options Sequence[str]
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffle_mode str
- The shuffle mode used for the job.
- temp_storage_prefix str
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- use_streaming_engine_resource_based_billing bool
- Whether the job uses the new streaming engine billing model based on resource usage.
- user_agent Mapping[str, str]
- A description of the process that generated the request.
- version Mapping[str, str]
- A structure describing which components and their versions of the service are required in order to run the job.
- worker_pools Sequence[WorkerPoolResponse]
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService String
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset String
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions Property Map
- Any debugging options to be supplied to the job.
- experiments List<String>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal String
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments Map<String>
- Experimental settings.
- sdkPipelineOptions Map<String>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail String
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName String
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions List<String>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffleMode String
- The shuffle mode used for the job.
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- useStreamingEngineResourceBasedBilling Boolean
- Whether the job uses the new streaming engine billing model based on resource usage.
- userAgent Map<String>
- A description of the process that generated the request.
- version Map<String>
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools List<Property Map>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
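EnvironmentResponse is the output-only counterpart of the environment you supply: fields such as shuffleMode and useStreamingEngineResourceBasedBilling are resolved by the service after the job is created. A minimal sketch of reading them back in Python, assuming a job resource named job declared as in the earlier example:

import pulumi

# Export service-resolved environment details from the job's outputs.
pulumi.export("shuffle_mode", job.environment.apply(lambda env: env.shuffle_mode))
pulumi.export("worker_region", job.environment.apply(lambda env: env.worker_region))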
ExecutionStageState, ExecutionStageStateArgs      
- CurrentStateTime string
- The time at which the stage transitioned to this state.
- ExecutionStageName string
- The name of the execution stage.
- ExecutionStageState Pulumi.GoogleNative.Dataflow.V1b3.ExecutionStageStateExecutionStageState
- Executions stage states allow the same set of values as JobState.
- CurrentStateTime string
- The time at which the stage transitioned to this state.
- ExecutionStageName string
- The name of the execution stage.
- ExecutionStageState ExecutionStageStateExecutionStageState
- Executions stage states allow the same set of values as JobState.
- currentStateTime String
- The time at which the stage transitioned to this state.
- executionStageName String
- The name of the execution stage.
- executionStageState ExecutionStageStateExecutionStageState
- Executions stage states allow the same set of values as JobState.
- currentStateTime string
- The time at which the stage transitioned to this state.
- executionStageName string
- The name of the execution stage.
- executionStageState ExecutionStageStateExecutionStageState
- Executions stage states allow the same set of values as JobState.
- current_state_time str
- The time at which the stage transitioned to this state.
- execution_stage_name str
- The name of the execution stage.
- execution_stage_state ExecutionStageStateExecutionStageState
- Executions stage states allow the same set of values as JobState.
- currentStateTime String
- The time at which the stage transitioned to this state.
- executionStageName String
- The name of the execution stage.
- executionStageState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"
- Executions stage states allow the same set of values as JobState.
ExecutionStageStateExecutionStageState, ExecutionStageStateExecutionStageStateArgs            
- JobStateUnknown 
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobStateStopped 
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobStateRunning 
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobStateDone 
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobStateFailed 
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateCancelled 
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobStateUpdated 
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateDraining 
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobStateDrained 
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobStatePending 
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobStateCancelling 
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobStateQueued 
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobStateResourceCleaningUp 
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- ExecutionStageStateExecutionStageStateJobStateUnknown 
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- ExecutionStageStateExecutionStageStateJobStateStopped 
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- ExecutionStageStateExecutionStageStateJobStateRunning 
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- ExecutionStageStateExecutionStageStateJobStateDone 
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- ExecutionStageStateExecutionStageStateJobStateFailed 
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- ExecutionStageStateExecutionStageStateJobStateCancelled 
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- ExecutionStageStateExecutionStageStateJobStateUpdated 
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- ExecutionStageStateExecutionStageStateJobStateDraining 
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- ExecutionStageStateExecutionStageStateJobStateDrained 
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- ExecutionStageStateExecutionStageStateJobStatePending 
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- ExecutionStageStateExecutionStageStateJobStateCancelling 
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- ExecutionStageStateExecutionStageStateJobStateQueued 
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- ExecutionStageStateExecutionStageStateJobStateResourceCleaningUp 
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JobStateUnknown 
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobStateStopped 
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobStateRunning 
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobStateDone 
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobStateFailed 
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateCancelled 
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobStateUpdated 
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateDraining 
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobStateDrained 
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobStatePending 
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobStateCancelling 
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobStateQueued 
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobStateResourceCleaningUp 
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JobStateUnknown 
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobStateStopped 
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobStateRunning 
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobStateDone 
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobStateFailed 
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateCancelled 
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobStateUpdated 
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateDraining 
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobStateDrained 
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobStatePending 
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobStateCancelling 
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobStateQueued 
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobStateResourceCleaningUp 
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JOB_STATE_UNKNOWN
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JOB_STATE_STOPPED
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JOB_STATE_RUNNING
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JOB_STATE_DONE
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JOB_STATE_FAILED
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_CANCELLED
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JOB_STATE_UPDATED
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_DRAINING
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JOB_STATE_DRAINED
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JOB_STATE_PENDING
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JOB_STATE_CANCELLING
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JOB_STATE_QUEUED
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JOB_STATE_RESOURCE_CLEANING_UP
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- "JOB_STATE_UNKNOWN"
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- "JOB_STATE_STOPPED"
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- "JOB_STATE_RUNNING"
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- "JOB_STATE_DONE"
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- "JOB_STATE_FAILED"
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- "JOB_STATE_CANCELLED"
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- "JOB_STATE_UPDATED"
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- "JOB_STATE_DRAINING"
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- "JOB_STATE_DRAINED"
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- "JOB_STATE_PENDING"
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- "JOB_STATE_CANCELLING"
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- "JOB_STATE_QUEUED"
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- "JOB_STATE_RESOURCE_CLEANING_UP"
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
ExecutionStageStateResponse, ExecutionStageStateResponseArgs        
- CurrentStateTime string
- The time at which the stage transitioned to this state.
- ExecutionStageName string
- The name of the execution stage.
- ExecutionStageState string
- Executions stage states allow the same set of values as JobState.
- CurrentStateTime string
- The time at which the stage transitioned to this state.
- ExecutionStageName string
- The name of the execution stage.
- ExecutionStageState string
- Executions stage states allow the same set of values as JobState.
- currentStateTime String
- The time at which the stage transitioned to this state.
- executionStageName String
- The name of the execution stage.
- executionStageState String
- Executions stage states allow the same set of values as JobState.
- currentStateTime string
- The time at which the stage transitioned to this state.
- executionStageName string
- The name of the execution stage.
- executionStageState string
- Executions stage states allow the same set of values as JobState.
- current_state_time str
- The time at which the stage transitioned to this state.
- execution_stage_name str
- The name of the execution stage.
- execution_stage_state str
- Executions stage states allow the same set of values as JobState.
- currentStateTime String
- The time at which the stage transitioned to this state.
- executionStageName String
- The name of the execution stage.
- executionStageState String
- Executions stage states allow the same set of values as JobState.
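Because stage states are output-only, per-stage progress is typically inspected after deployment by reading the job's stage_states output. A minimal Python sketch, assuming the job variable from the earlier example; the guard against None is only there because the list may not be populated for every job:

import pulumi

# Map each execution stage name to its most recent state.
pulumi.export(
    "stage_states",
    job.stage_states.apply(
        lambda states: {s.execution_stage_name: s.execution_stage_state for s in (states or [])}
    ),
)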
ExecutionStageSummary, ExecutionStageSummaryArgs      
- ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSource> 
- Collections produced and consumed by component transforms of this stage.
- ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransform> 
- Transforms that comprise this execution stage.
- Id string
- Dataflow service generated id for this stage.
- InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSource> 
- Input sources for this stage.
- Kind Pulumi.GoogleNative.Dataflow.V1b3.ExecutionStageSummaryKind 
- Type of transform this stage is executing.
- Name string
- Dataflow service generated name for this stage.
- OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSource> 
- Output sources for this stage.
- PrerequisiteStage List<string>
- Other stages that must complete before this stage can run.
- ComponentSource []ComponentSource 
- Collections produced and consumed by component transforms of this stage.
- ComponentTransform []ComponentTransform 
- Transforms that comprise this execution stage.
- Id string
- Dataflow service generated id for this stage.
- InputSource []StageSource 
- Input sources for this stage.
- Kind ExecutionStageSummaryKind 
- Type of transform this stage is executing.
- Name string
- Dataflow service generated name for this stage.
- OutputSource []StageSource 
- Output sources for this stage.
- PrerequisiteStage []string
- Other stages that must complete before this stage can run.
- componentSource List<ComponentSource> 
- Collections produced and consumed by component transforms of this stage.
- componentTransform List<ComponentTransform> 
- Transforms that comprise this execution stage.
- id String
- Dataflow service generated id for this stage.
- inputSource List<StageSource> 
- Input sources for this stage.
- kind ExecutionStageSummaryKind 
- Type of transform this stage is executing.
- name String
- Dataflow service generated name for this stage.
- outputSource List<StageSource> 
- Output sources for this stage.
- prerequisiteStage List<String>
- Other stages that must complete before this stage can run.
- componentSource ComponentSource[] 
- Collections produced and consumed by component transforms of this stage.
- componentTransform ComponentTransform[] 
- Transforms that comprise this execution stage.
- id string
- Dataflow service generated id for this stage.
- inputSource StageSource[] 
- Input sources for this stage.
- kind ExecutionStageSummaryKind 
- Type of transform this stage is executing.
- name string
- Dataflow service generated name for this stage.
- outputSource StageSource[] 
- Output sources for this stage.
- prerequisiteStage string[]
- Other stages that must complete before this stage can run.
- component_source Sequence[ComponentSource] 
- Collections produced and consumed by component transforms of this stage.
- component_transform Sequence[ComponentTransform] 
- Transforms that comprise this execution stage.
- id str
- Dataflow service generated id for this stage.
- input_source Sequence[StageSource] 
- Input sources for this stage.
- kind ExecutionStageSummaryKind 
- Type of transform this stage is executing.
- name str
- Dataflow service generated name for this stage.
- output_source Sequence[StageSource] 
- Output sources for this stage.
- prerequisite_stage Sequence[str]
- Other stages that must complete before this stage can run.
- componentSource List<Property Map>
- Collections produced and consumed by component transforms of this stage.
- componentTransform List<Property Map>
- Transforms that comprise this execution stage.
- id String
- Dataflow service generated id for this stage.
- inputSource List<Property Map>
- Input sources for this stage.
- kind "UNKNOWN_KIND" | "PAR_DO_KIND" | "GROUP_BY_KEY_KIND" | "FLATTEN_KIND" | "READ_KIND" | "WRITE_KIND" | "CONSTANT_KIND" | "SINGLETON_KIND" | "SHUFFLE_KIND"
- Type of transform this stage is executing.
- name String
- Dataflow service generated name for this stage.
- outputSource List<Property Map>
- Output sources for this stage.
- prerequisiteStage List<String>
- Other stages that must complete before this stage can run.
ExecutionStageSummaryKind, ExecutionStageSummaryKindArgs        
- UnknownKind 
- UNKNOWN_KIND: Unrecognized transform type.
- ParDoKind 
- PAR_DO_KIND: ParDo transform.
- GroupByKeyKind 
- GROUP_BY_KEY_KIND: Group By Key transform.
- FlattenKind 
- FLATTEN_KIND: Flatten transform.
- ReadKind 
- READ_KIND: Read transform.
- WriteKind 
- WRITE_KIND: Write transform.
- ConstantKind 
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SingletonKind 
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ShuffleKind 
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- ExecutionStageSummaryKindUnknownKind 
- UNKNOWN_KIND: Unrecognized transform type.
- ExecutionStageSummaryKindParDoKind 
- PAR_DO_KIND: ParDo transform.
- ExecutionStageSummaryKindGroupByKeyKind 
- GROUP_BY_KEY_KIND: Group By Key transform.
- ExecutionStageSummaryKindFlattenKind 
- FLATTEN_KIND: Flatten transform.
- ExecutionStageSummaryKindReadKind 
- READ_KIND: Read transform.
- ExecutionStageSummaryKindWriteKind 
- WRITE_KIND: Write transform.
- ExecutionStageSummaryKindConstantKind 
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- ExecutionStageSummaryKindSingletonKind 
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ExecutionStageSummaryKindShuffleKind 
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- UnknownKind 
- UNKNOWN_KIND: Unrecognized transform type.
- ParDoKind 
- PAR_DO_KIND: ParDo transform.
- GroupByKeyKind 
- GROUP_BY_KEY_KIND: Group By Key transform.
- FlattenKind 
- FLATTEN_KIND: Flatten transform.
- ReadKind 
- READ_KIND: Read transform.
- WriteKind 
- WRITE_KIND: Write transform.
- ConstantKind 
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SingletonKind 
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ShuffleKind 
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- UnknownKind 
- UNKNOWN_KIND: Unrecognized transform type.
- ParDoKind 
- PAR_DO_KIND: ParDo transform.
- GroupByKeyKind 
- GROUP_BY_KEY_KIND: Group By Key transform.
- FlattenKind 
- FLATTEN_KIND: Flatten transform.
- ReadKind 
- READ_KIND: Read transform.
- WriteKind 
- WRITE_KIND: Write transform.
- ConstantKind 
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SingletonKind 
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ShuffleKind 
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- UNKNOWN_KIND
- UNKNOWN_KIND: Unrecognized transform type.
- PAR_DO_KIND
- PAR_DO_KIND: ParDo transform.
- GROUP_BY_KEY_KIND
- GROUP_BY_KEY_KIND: Group By Key transform.
- FLATTEN_KIND
- FLATTEN_KIND: Flatten transform.
- READ_KIND
- READ_KIND: Read transform.
- WRITE_KIND
- WRITE_KIND: Write transform.
- CONSTANT_KIND
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SINGLETON_KIND
- SINGLETON_KIND: Creates a Singleton view of a collection.
- SHUFFLE_KIND
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- "UNKNOWN_KIND"
- UNKNOWN_KIND: Unrecognized transform type.
- "PAR_DO_KIND"
- PAR_DO_KIND: ParDo transform.
- "GROUP_BY_KEY_KIND"
- GROUP_BY_KEY_KIND: Group By Key transform.
- "FLATTEN_KIND"
- FLATTEN_KIND: Flatten transform.
- "READ_KIND"
- READ_KIND: Read transform.
- "WRITE_KIND"
- WRITE_KIND: Write transform.
- "CONSTANT_KIND"
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- "SINGLETON_KIND"
- SINGLETON_KIND: Creates a Singleton view of a collection.
- "SHUFFLE_KIND"
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
ExecutionStageSummaryResponse, ExecutionStageSummaryResponseArgs        
- ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceResponse> 
- Collections produced and consumed by component transforms of this stage.
- ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformResponse> 
- Transforms that comprise this execution stage.
- InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse> 
- Input sources for this stage.
- Kind string
- Type of transform this stage is executing.
- Name string
- Dataflow service generated name for this stage.
- OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse> 
- Output sources for this stage.
- PrerequisiteStage List<string>
- Other stages that must complete before this stage can run.
- ComponentSource []ComponentSourceResponse 
- Collections produced and consumed by component transforms of this stage.
- ComponentTransform []ComponentTransformResponse 
- Transforms that comprise this execution stage.
- InputSource []StageSourceResponse 
- Input sources for this stage.
- Kind string
- Type of transform this stage is executing.
- Name string
- Dataflow service generated name for this stage.
- OutputSource []StageSourceResponse 
- Output sources for this stage.
- PrerequisiteStage []string
- Other stages that must complete before this stage can run.
- componentSource List<ComponentSourceResponse> 
- Collections produced and consumed by component transforms of this stage.
- componentTransform List<ComponentTransformResponse> 
- Transforms that comprise this execution stage.
- inputSource List<StageSourceResponse> 
- Input sources for this stage.
- kind String
- Type of transform this stage is executing.
- name String
- Dataflow service generated name for this stage.
- outputSource List<StageSource Response> 
- Output sources for this stage.
- prerequisiteStage List<String>
- Other stages that must complete before this stage can run.
- componentSource ComponentSource Response[] 
- Collections produced and consumed by component transforms of this stage.
- componentTransform ComponentTransform Response[] 
- Transforms that comprise this execution stage.
- inputSource StageSource Response[] 
- Input sources for this stage.
- kind string
- Type of transform this stage is executing.
- name string
- Dataflow service generated name for this stage.
- outputSource StageSource Response[] 
- Output sources for this stage.
- prerequisiteStage string[]
- Other stages that must complete before this stage can run.
- component_source Sequence[ComponentSource Response] 
- Collections produced and consumed by component transforms of this stage.
- component_transform Sequence[ComponentTransform Response] 
- Transforms that comprise this execution stage.
- input_source Sequence[StageSource Response] 
- Input sources for this stage.
- kind str
- Type of transform this stage is executing.
- name str
- Dataflow service generated name for this stage.
- output_source Sequence[StageSource Response] 
- Output sources for this stage.
- prerequisite_stage Sequence[str]
- Other stages that must complete before this stage can run.
- componentSource List<Property Map>
- Collections produced and consumed by component transforms of this stage.
- componentTransform List<Property Map>
- Transforms that comprise this execution stage.
- inputSource List<Property Map>
- Input sources for this stage.
- kind String
- Type of transform this stage is executing.
- name String
- Dataflow service generated name for this stage.
- outputSource List<Property Map>
- Output sources for this stage.
- prerequisiteStage List<String>
- Other stages that must complete before this stage can run.
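Continuing the sketch above, the same pipeline description also exposes the execution stages, so the service-generated stage names and their prerequisites can be exported as well (again assuming the description has been populated):

def stage_prerequisites(pd):
    # Map each service-generated stage name to the stages it depends on.
    if pd is None or not pd.execution_pipeline_stage:
        return {}
    return {s.name: list(s.prerequisite_stage or []) for s in pd.execution_pipeline_stage}

pulumi.export("stagePrerequisites", job.pipeline_description.apply(stage_prerequisites))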
FileIODetails, FileIODetailsArgs    
- FilePattern string
- File Pattern used to access files by the connector.
- FilePattern string
- File Pattern used to access files by the connector.
- filePattern String
- File Pattern used to access files by the connector.
- filePattern string
- File Pattern used to access files by the connector.
- file_pattern str
- File Pattern used to access files by the connector.
- filePattern String
- File Pattern used to access files by the connector.
FileIODetailsResponse, FileIODetailsResponseArgs      
- FilePattern string
- File Pattern used to access files by the connector.
- FilePattern string
- File Pattern used to access files by the connector.
- filePattern String
- File Pattern used to access files by the connector.
- filePattern string
- File Pattern used to access files by the connector.
- file_pattern str
- File Pattern used to access files by the connector.
- filePattern String
- File Pattern used to access files by the connector.
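File details are normally reported back by the service, but the input shape is simple enough to construct directly, for example when supplying job_metadata. A minimal sketch with a placeholder bucket path:

import pulumi_google_native.dataflow.v1b3 as dataflow

# Hypothetical pattern; use the files your connector actually reads.
file_details = dataflow.FileIODetailsArgs(file_pattern="gs://my-bucket/input/*.csv")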
JobCurrentState, JobCurrentStateArgs      
- JobStateUnknown
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobStateStopped
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobStateRunning
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobStateDone
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobStateFailed
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateCancelled
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobStateUpdated
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateDraining
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobStateDrained
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobStatePending
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobStateCancelling
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobStateQueued
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobStateResourceCleaningUp
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JobCurrentStateJobStateUnknown
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobCurrentStateJobStateStopped
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobCurrentStateJobStateRunning
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobCurrentStateJobStateDone
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobCurrentStateJobStateFailed
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobCurrentStateJobStateCancelled
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobCurrentStateJobStateUpdated
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobCurrentStateJobStateDraining
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobCurrentStateJobStateDrained
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobCurrentStateJobStatePending
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobCurrentStateJobStateCancelling
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobCurrentStateJobStateQueued
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobCurrentStateJobStateResourceCleaningUp
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JOB_STATE_UNKNOWN
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JOB_STATE_STOPPED
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JOB_STATE_RUNNING
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JOB_STATE_DONE
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JOB_STATE_FAILED
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_CANCELLED
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JOB_STATE_UPDATED
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_DRAINING
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JOB_STATE_DRAINED
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JOB_STATE_PENDING
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JOB_STATE_CANCELLING
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JOB_STATE_QUEUED
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JOB_STATE_RESOURCE_CLEANING_UP
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- "JOB_STATE_UNKNOWN"
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- "JOB_STATE_STOPPED"
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- "JOB_STATE_RUNNING"
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- "JOB_STATE_DONE"
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- "JOB_STATE_FAILED"
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- "JOB_STATE_CANCELLED"
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- "JOB_STATE_UPDATED"
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- "JOB_STATE_DRAINING"
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- "JOB_STATE_DRAINED"
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- "JOB_STATE_PENDING"
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- "JOB_STATE_CANCELLING"
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- "JOB_STATE_QUEUED"
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- "JOB_STATE_RESOURCE_CLEANING_UP"
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
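current_state is set by the service rather than by the caller, so the usual pattern is to surface it as a stack output. A brief sketch with a placeholder job:

import pulumi
import pulumi_google_native.dataflow.v1b3 as dataflow

job = dataflow.Job("state-example", project="my-project", location="us-central1")

# The exported value will be one of the JOB_STATE_* values listed above.
pulumi.export("currentState", job.current_state)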
JobExecutionInfo, JobExecutionInfoArgs      
- Stages Dictionary<string, string>
- A mapping from each stage to the information about that stage.
- Stages map[string]string
- A mapping from each stage to the information about that stage.
- stages Map<String,String>
- A mapping from each stage to the information about that stage.
- stages {[key: string]: string}
- A mapping from each stage to the information about that stage.
- stages Mapping[str, str]
- A mapping from each stage to the information about that stage.
- stages Map<String>
- A mapping from each stage to the information about that stage.
JobExecutionInfoResponse, JobExecutionInfoResponseArgs        
- Stages Dictionary<string, string>
- A mapping from each stage to the information about that stage.
- Stages map[string]string
- A mapping from each stage to the information about that stage.
- stages Map<String,String>
- A mapping from each stage to the information about that stage.
- stages {[key: string]: string}
- A mapping from each stage to the information about that stage.
- stages Mapping[str, str]
- A mapping from each stage to the information about that stage.
- stages Map<String>
- A mapping from each stage to the information about that stage.
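The stage map can be surfaced the same way once the service populates it. A hedged sketch (execution_info may be empty until the job is running):

import pulumi
import pulumi_google_native.dataflow.v1b3 as dataflow

job = dataflow.Job("execution-info-example", project="my-project", location="us-central1")

# stages maps each stage name to information about that stage.
pulumi.export(
    "stageInfo",
    job.execution_info.apply(lambda info: info.stages if info else {}),
)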
JobMetadata, JobMetadataArgs    
- BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetails>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetails>
- Identification of a BigQuery source used in the Dataflow job.
- DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetails>
- Identification of a Datastore source used in the Dataflow job.
- FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetails>
- Identification of a File source used in the Dataflow job.
- PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetails>
- Identification of a Pub/Sub source used in the Dataflow job.
- SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersion
- The SDK version used to run the job.
- SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetails>
- Identification of a Spanner source used in the Dataflow job.
- UserDisplayProperties Dictionary<string, string>
- List of display properties to help UI filter jobs.
- BigTableDetails []BigTableIODetails
- Identification of a Cloud Bigtable source used in the Dataflow job.
- BigqueryDetails []BigQueryIODetails
- Identification of a BigQuery source used in the Dataflow job.
- DatastoreDetails []DatastoreIODetails
- Identification of a Datastore source used in the Dataflow job.
- FileDetails []FileIODetails
- Identification of a File source used in the Dataflow job.
- PubsubDetails []PubSubIODetails
- Identification of a Pub/Sub source used in the Dataflow job.
- SdkVersion SdkVersion
- The SDK version used to run the job.
- SpannerDetails []SpannerIODetails
- Identification of a Spanner source used in the Dataflow job.
- UserDisplayProperties map[string]string
- List of display properties to help UI filter jobs.
- bigTableDetails List<BigTableIODetails>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails List<BigQueryIODetails>
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails List<DatastoreIODetails>
- Identification of a Datastore source used in the Dataflow job.
- fileDetails List<FileIODetails>
- Identification of a File source used in the Dataflow job.
- pubsubDetails List<PubSubIODetails>
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion SdkVersion
- The SDK version used to run the job.
- spannerDetails List<SpannerIODetails>
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties Map<String,String>
- List of display properties to help UI filter jobs.
- bigTableDetails BigTableIODetails[]
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails BigQueryIODetails[]
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails DatastoreIODetails[]
- Identification of a Datastore source used in the Dataflow job.
- fileDetails FileIODetails[]
- Identification of a File source used in the Dataflow job.
- pubsubDetails PubSubIODetails[]
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion SdkVersion
- The SDK version used to run the job.
- spannerDetails SpannerIODetails[]
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties {[key: string]: string}
- List of display properties to help UI filter jobs.
- big_table_details Sequence[BigTableIODetails]
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigquery_details Sequence[BigQueryIODetails]
- Identification of a BigQuery source used in the Dataflow job.
- datastore_details Sequence[DatastoreIODetails]
- Identification of a Datastore source used in the Dataflow job.
- file_details Sequence[FileIODetails]
- Identification of a File source used in the Dataflow job.
- pubsub_details Sequence[PubSubIODetails]
- Identification of a Pub/Sub source used in the Dataflow job.
- sdk_version SdkVersion
- The SDK version used to run the job.
- spanner_details Sequence[SpannerIODetails]
- Identification of a Spanner source used in the Dataflow job.
- user_display_properties Mapping[str, str]
- List of display properties to help UI filter jobs.
- bigTableDetails List<Property Map>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails List<Property Map>
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails List<Property Map>
- Identification of a Datastore source used in the Dataflow job.
- fileDetails List<Property Map>
- Identification of a File source used in the Dataflow job.
- pubsubDetails List<Property Map>
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion Property Map
- The SDK version used to run the job.
- spannerDetails List<Property Map>
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties Map<String>
- List of display properties to help UI filter jobs.
JobMetadataResponse, JobMetadataResponseArgs      
- BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsResponse>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsResponse>
- Identification of a BigQuery source used in the Dataflow job.
- DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsResponse>
- Identification of a Datastore source used in the Dataflow job.
- FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsResponse>
- Identification of a File source used in the Dataflow job.
- PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsResponse>
- Identification of a Pub/Sub source used in the Dataflow job.
- SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersionResponse
- The SDK version used to run the job.
- SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsResponse>
- Identification of a Spanner source used in the Dataflow job.
- UserDisplayProperties Dictionary<string, string>
- List of display properties to help UI filter jobs.
- BigTableDetails []BigTableIODetailsResponse
- Identification of a Cloud Bigtable source used in the Dataflow job.
- BigqueryDetails []BigQueryIODetailsResponse
- Identification of a BigQuery source used in the Dataflow job.
- DatastoreDetails []DatastoreIODetailsResponse
- Identification of a Datastore source used in the Dataflow job.
- FileDetails []FileIODetailsResponse
- Identification of a File source used in the Dataflow job.
- PubsubDetails []PubSubIODetailsResponse
- Identification of a Pub/Sub source used in the Dataflow job.
- SdkVersion SdkVersionResponse
- The SDK version used to run the job.
- SpannerDetails []SpannerIODetailsResponse
- Identification of a Spanner source used in the Dataflow job.
- UserDisplayProperties map[string]string
- List of display properties to help UI filter jobs.
- bigTableDetails List<BigTableIODetailsResponse>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails List<BigQueryIODetailsResponse>
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails List<DatastoreIODetailsResponse>
- Identification of a Datastore source used in the Dataflow job.
- fileDetails List<FileIODetailsResponse>
- Identification of a File source used in the Dataflow job.
- pubsubDetails List<PubSubIODetailsResponse>
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion SdkVersionResponse
- The SDK version used to run the job.
- spannerDetails List<SpannerIODetailsResponse>
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties Map<String,String>
- List of display properties to help UI filter jobs.
- bigTableDetails BigTableIODetailsResponse[]
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails BigQueryIODetailsResponse[]
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails DatastoreIODetailsResponse[]
- Identification of a Datastore source used in the Dataflow job.
- fileDetails FileIODetailsResponse[]
- Identification of a File source used in the Dataflow job.
- pubsubDetails PubSubIODetailsResponse[]
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion SdkVersionResponse
- The SDK version used to run the job.
- spannerDetails SpannerIODetailsResponse[]
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties {[key: string]: string}
- List of display properties to help UI filter jobs.
- big_table_details Sequence[BigTableIODetailsResponse]
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigquery_details Sequence[BigQueryIODetailsResponse]
- Identification of a BigQuery source used in the Dataflow job.
- datastore_details Sequence[DatastoreIODetailsResponse]
- Identification of a Datastore source used in the Dataflow job.
- file_details Sequence[FileIODetailsResponse]
- Identification of a File source used in the Dataflow job.
- pubsub_details Sequence[PubSubIODetailsResponse]
- Identification of a Pub/Sub source used in the Dataflow job.
- sdk_version SdkVersionResponse
- The SDK version used to run the job.
- spanner_details Sequence[SpannerIODetailsResponse]
- Identification of a Spanner source used in the Dataflow job.
- user_display_properties Mapping[str, str]
- List of display properties to help UI filter jobs.
- bigTableDetails List<Property Map>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails List<Property Map>
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails List<Property Map>
- Identification of a Datastore source used in the Dataflow job.
- fileDetails List<Property Map>
- Identification of a File source used in the Dataflow job.
- pubsubDetails List<Property Map>
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion Property Map
- The SDK version used to run the job.
- spannerDetails List<Property Map>
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties Map<String>
- List of display properties to help UI filter jobs.
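Most of this metadata is filled in by the service, but the input type shows the shape it takes. A hypothetical sketch that attaches a file source description and some display properties to a job (all names and patterns are placeholders):

import pulumi_google_native.dataflow.v1b3 as dataflow

metadata = dataflow.JobMetadataArgs(
    file_details=[
        dataflow.FileIODetailsArgs(file_pattern="gs://my-bucket/input/*.csv"),
    ],
    user_display_properties={"team": "data-eng"},
)

job = dataflow.Job(
    "metadata-example",
    project="my-project",
    location="us-central1",
    job_metadata=metadata,
)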
JobRequestedState, JobRequestedStateArgs      
- JobStateUnknown
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobStateStopped
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobStateRunning
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobStateDone
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobStateFailed
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateCancelled
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobStateUpdated
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobStateDraining
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobStateDrained
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobStatePending
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobStateCancelling
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobStateQueued
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobStateResourceCleaningUp
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JobRequestedStateJobStateUnknown
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JobRequestedStateJobStateStopped
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JobRequestedStateJobStateRunning
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JobRequestedStateJobStateDone
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JobRequestedStateJobStateFailed
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobRequestedStateJobStateCancelled
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JobRequestedStateJobStateUpdated
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JobRequestedStateJobStateDraining
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JobRequestedStateJobStateDrained
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JobRequestedStateJobStatePending
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JobRequestedStateJobStateCancelling
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JobRequestedStateJobStateQueued
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JobRequestedStateJobStateResourceCleaningUp
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- JOB_STATE_UNKNOWN
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- JOB_STATE_STOPPED
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- JOB_STATE_RUNNING
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- JOB_STATE_DONE
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- JOB_STATE_FAILED
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_CANCELLED
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- JOB_STATE_UPDATED
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- JOB_STATE_DRAINING
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- JOB_STATE_DRAINED
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- JOB_STATE_PENDING
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- JOB_STATE_CANCELLING
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- JOB_STATE_QUEUED
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- JOB_STATE_RESOURCE_CLEANING_UP
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
- "JOB_STATE_UNKNOWN"
- JOB_STATE_UNKNOWN: The job's run state isn't specified.
- "JOB_STATE_STOPPED"
- JOB_STATE_STOPPED: JOB_STATE_STOPPED indicates that the job has not yet started to run.
- "JOB_STATE_RUNNING"
- JOB_STATE_RUNNING: JOB_STATE_RUNNING indicates that the job is currently running.
- "JOB_STATE_DONE"
- JOB_STATE_DONE: JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
- "JOB_STATE_FAILED"
- JOB_STATE_FAILED: JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- "JOB_STATE_CANCELLED"
- JOB_STATE_CANCELLED: JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
- "JOB_STATE_UPDATED"
- JOB_STATE_UPDATED: JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
- "JOB_STATE_DRAINING"
- JOB_STATE_DRAINING: JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
- "JOB_STATE_DRAINED"
- JOB_STATE_DRAINED: JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
- "JOB_STATE_PENDING"
- JOB_STATE_PENDING: JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
- "JOB_STATE_CANCELLING"
- JOB_STATE_CANCELLING: JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
- "JOB_STATE_QUEUED"
- JOB_STATE_QUEUED: JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
- "JOB_STATE_RESOURCE_CLEANING_UP"
- JOB_STATE_RESOURCE_CLEANING_UP: JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
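requested_state asks the service to move the job toward a target state, for example draining a streaming job instead of cancelling it outright. A hedged sketch (the pipeline definition itself is elided, and the resource name is a placeholder):

import pulumi_google_native.dataflow.v1b3 as dataflow

# Hypothetical: request that the job drain rather than stop abruptly.
job = dataflow.Job(
    "drain-example",
    project="my-project",
    location="us-central1",
    requested_state=dataflow.JobRequestedState.JOB_STATE_DRAINING,
)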
JobType, JobTypeArgs    
- JobTypeUnknown
- JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
- JobTypeBatch
- JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
- JobTypeStreaming
- JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
- JobTypeJobTypeUnknown
- JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
- JobTypeJobTypeBatch
- JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
- JobTypeJobTypeStreaming
- JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
- JOB_TYPE_UNKNOWN
- JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
- JOB_TYPE_BATCH
- JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
- JOB_TYPE_STREAMING
- JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
- "JOB_TYPE_UNKNOWN"
- JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
- "JOB_TYPE_BATCH"
- JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
- "JOB_TYPE_STREAMING"
- JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
Package, PackageArgs  
PackageResponse, PackageResponseArgs    
PipelineDescription, PipelineDescriptionArgs    
- DisplayData List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Display Data> 
- Pipeline level display data.
- ExecutionPipeline List<Pulumi.Stage Google Native. Dataflow. V1b3. Inputs. Execution Stage Summary> 
- Description of each stage of execution of the pipeline.
- OriginalPipeline List<Pulumi.Transform Google Native. Dataflow. V1b3. Inputs. Transform Summary> 
- Description of each transform in the pipeline and collections between them.
- StepNames stringHash 
- A hash value of the submitted pipeline portable graph step names if exists.
- DisplayData []DisplayData 
- Pipeline level display data.
- ExecutionPipelineStage []ExecutionStageSummary
- Description of each stage of execution of the pipeline.
- OriginalPipelineTransform []TransformSummary
- Description of each transform in the pipeline and collections between them.
- StepNamesHash string
- A hash value of the submitted pipeline portable graph step names if exists.
- displayData List<DisplayData> 
- Pipeline level display data.
- executionPipelineStage List<ExecutionStageSummary>
- Description of each stage of execution of the pipeline.
- originalPipelineTransform List<TransformSummary>
- Description of each transform in the pipeline and collections between them.
- stepNamesHash String
- A hash value of the submitted pipeline portable graph step names if exists.
- displayData DisplayData[] 
- Pipeline level display data.
- executionPipelineStage ExecutionStageSummary[]
- Description of each stage of execution of the pipeline.
- originalPipelineTransform TransformSummary[]
- Description of each transform in the pipeline and collections between them.
- stepNamesHash string
- A hash value of the submitted pipeline portable graph step names if exists.
- display_data Sequence[DisplayData] 
- Pipeline level display data.
- execution_pipeline_stage Sequence[ExecutionStageSummary]
- Description of each stage of execution of the pipeline.
- original_pipeline_transform Sequence[TransformSummary]
- Description of each transform in the pipeline and collections between them.
- step_names_hash str
- A hash value of the submitted pipeline portable graph step names if exists.
- displayData List<Property Map>
- Pipeline level display data.
- executionPipelineStage List<Property Map>
- Description of each stage of execution of the pipeline.
- originalPipelineTransform List<Property Map>
- Description of each transform in the pipeline and collections between them.
- stepNamesHash String
- A hash value of the submitted pipeline portable graph step names if exists.
PipelineDescriptionResponse, PipelineDescriptionResponseArgs      
- DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
- Pipeline level display data.
- ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryResponse>
- Description of each stage of execution of the pipeline.
- OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryResponse>
- Description of each transform in the pipeline and collections between them.
- StepNamesHash string
- A hash value of the submitted pipeline portable graph step names if exists.
- DisplayData []DisplayDataResponse
- Pipeline level display data.
- ExecutionPipelineStage []ExecutionStageSummaryResponse
- Description of each stage of execution of the pipeline.
- OriginalPipelineTransform []TransformSummaryResponse
- Description of each transform in the pipeline and collections between them.
- StepNamesHash string
- A hash value of the submitted pipeline portable graph step names if exists.
- displayData List<DisplayDataResponse>
- Pipeline level display data.
- executionPipelineStage List<ExecutionStageSummaryResponse>
- Description of each stage of execution of the pipeline.
- originalPipelineTransform List<TransformSummaryResponse>
- Description of each transform in the pipeline and collections between them.
- stepNamesHash String
- A hash value of the submitted pipeline portable graph step names if exists.
- displayData DisplayDataResponse[]
- Pipeline level display data.
- executionPipelineStage ExecutionStageSummaryResponse[]
- Description of each stage of execution of the pipeline.
- originalPipelineTransform TransformSummaryResponse[]
- Description of each transform in the pipeline and collections between them.
- stepNamesHash string
- A hash value of the submitted pipeline portable graph step names if exists.
- display_data Sequence[DisplayDataResponse]
- Pipeline level display data.
- execution_pipeline_stage Sequence[ExecutionStageSummaryResponse]
- Description of each stage of execution of the pipeline.
- original_pipeline_transform Sequence[TransformSummaryResponse]
- Description of each transform in the pipeline and collections between them.
- step_names_hash str
- A hash value of the submitted pipeline portable graph step names if exists.
- displayData List<Property Map>
- Pipeline level display data.
- executionPipelineStage List<Property Map>
- Description of each stage of execution of the pipeline.
- originalPipelineTransform List<Property Map>
- Description of each transform in the pipeline and collections between them.
- stepNamesHash String
- A hash value of the submitted pipeline portable graph step names if exists.
PubSubIODetails, PubSubIODetailsArgs      
- Subscription string
- Subscription used in the connection.
- Topic string
- Topic accessed in the connection.
- Subscription string
- Subscription used in the connection.
- Topic string
- Topic accessed in the connection.
- subscription String
- Subscription used in the connection.
- topic String
- Topic accessed in the connection.
- subscription string
- Subscription used in the connection.
- topic string
- Topic accessed in the connection.
- subscription str
- Subscription used in the connection.
- topic str
- Topic accessed in the connection.
- subscription String
- Subscription used in the connection.
- topic String
- Topic accessed in the connection.
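PubSubIODetails values are reported back by the service as job metadata rather than configured directly. A hedged TypeScript sketch of reading them, assuming the job's jobMetadata output exposes a pubsubDetails list as described by the JobMetadata type elsewhere on this page; the job itself is a placeholder declaration:
import * as google_native from "@pulumi/google-native";

// Illustrative only: a placeholder streaming job whose Pub/Sub sources and
// sinks are read back from the service-populated job metadata output.
const streamingJob = new google_native.dataflow.v1b3.Job("example-streaming-job", {
    location: "us-central1",
    type: "JOB_TYPE_STREAMING",
});

// Export the subscriptions/topics Dataflow recorded for this job, if any.
export const pubsubIo = streamingJob.jobMetadata.apply(md => md?.pubsubDetails);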
PubSubIODetailsResponse, PubSubIODetailsResponseArgs        
- Subscription string
- Subscription used in the connection.
- Topic string
- Topic accessed in the connection.
- Subscription string
- Subscription used in the connection.
- Topic string
- Topic accessed in the connection.
- subscription String
- Subscription used in the connection.
- topic String
- Topic accessed in the connection.
- subscription string
- Subscription used in the connection.
- topic string
- Topic accessed in the connection.
- subscription str
- Subscription used in the connection.
- topic str
- Topic accessed in the connection.
- subscription String
- Subscription used in the connection.
- topic String
- Topic accessed in the connection.
RuntimeUpdatableParams, RuntimeUpdatableParamsArgs      
- MaxNumWorkers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- MinNumWorkers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- MaxNumWorkers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- MinNumWorkers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers Integer
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers Integer
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers number
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers number
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- max_num_workers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- min_num_workers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers Number
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers Number
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
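Because these parameters are part of JobArgs, they can be set when the job is declared. A minimal TypeScript sketch with illustrative worker counts, assuming a Streaming Engine job:
import * as google_native from "@pulumi/google-native";

// Sketch: bound autoscaling for a Streaming Engine job via the updatable
// runtime parameters described above. The counts are placeholders.
const job = new google_native.dataflow.v1b3.Job("autoscaled-streaming-job", {
    location: "us-central1",
    type: "JOB_TYPE_STREAMING",
    runtimeUpdatableParams: {
        minNumWorkers: 1,    // never scale below one worker
        maxNumWorkers: 10,   // cap autoscaling at ten workers
    },
});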
RuntimeUpdatableParamsResponse, RuntimeUpdatableParamsResponseArgs        
- MaxNumWorkers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- MinNumWorkers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- MaxNumWorkers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- MinNumWorkers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers Integer
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers Integer
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers number
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers number
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- max_num_workers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- min_num_workers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers Number
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers Number
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
SdkBugResponse, SdkBugResponseArgs      
SdkHarnessContainerImage, SdkHarnessContainerImageArgs        
- Capabilities List<string>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- ContainerImage string
- A docker container image that resides in Google Container Registry.
- EnvironmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- UseSingleCorePerContainer bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- Capabilities []string
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- ContainerImage string
- A docker container image that resides in Google Container Registry.
- EnvironmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- UseSingleCorePerContainer bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities List<String>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage String
- A docker container image that resides in Google Container Registry.
- environmentId String
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer Boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities string[]
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage string
- A docker container image that resides in Google Container Registry.
- environmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities Sequence[str]
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- container_image str
- A docker container image that resides in Google Container Registry.
- environment_id str
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- use_single_core_per_container bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities List<String>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage String
- A docker container image that resides in Google Container Registry.
- environmentId String
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer Boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
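SdkHarnessContainerImage entries are supplied per worker pool. A hedged TypeScript sketch, assuming the Environment/WorkerPool nesting documented elsewhere on this page (environment.workerPools[].sdkHarnessContainerImages[]); the image URL and capability URN are placeholders:
import * as google_native from "@pulumi/google-native";

// Sketch: attach a custom Beam SDK harness container to a worker pool.
const job = new google_native.dataflow.v1b3.Job("custom-harness-job", {
    location: "us-central1",
    type: "JOB_TYPE_BATCH",
    environment: {
        workerPools: [{
            sdkHarnessContainerImages: [{
                containerImage: "gcr.io/my-project/my-beam-harness:latest", // placeholder image
                useSingleCorePerContainer: false, // let Dataflow use multiple cores per container
                capabilities: ["beam:coder:bytes:v1"], // placeholder capability URN
            }],
        }],
    },
});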
SdkHarnessContainerImageResponse, SdkHarnessContainerImageResponseArgs          
- Capabilities List<string>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- ContainerImage string
- A docker container image that resides in Google Container Registry.
- EnvironmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- UseSingleCorePerContainer bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- Capabilities []string
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- ContainerImage string
- A docker container image that resides in Google Container Registry.
- EnvironmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- UseSingleCorePerContainer bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities List<String>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage String
- A docker container image that resides in Google Container Registry.
- environmentId String
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer Boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities string[]
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage string
- A docker container image that resides in Google Container Registry.
- environmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities Sequence[str]
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- container_image str
- A docker container image that resides in Google Container Registry.
- environment_id str
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- use_single_core_per_container bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities List<String>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage String
- A docker container image that resides in Google Container Registry.
- environmentId String
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer Boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
SdkVersion, SdkVersionArgs    
- SdkSupportStatus Pulumi.GoogleNative.Dataflow.V1b3.SdkVersionSdkSupportStatus
- The support status for this SDK version.
- Version string
- The version of the SDK used to run the job.
- VersionDisplayName string
- A readable string describing the version of the SDK.
- SdkSupportStatus SdkVersionSdkSupportStatus
- The support status for this SDK version.
- Version string
- The version of the SDK used to run the job.
- VersionDisplayName string
- A readable string describing the version of the SDK.
- sdkSupportStatus SdkVersionSdkSupportStatus
- The support status for this SDK version.
- version String
- The version of the SDK used to run the job.
- versionDisplayName String
- A readable string describing the version of the SDK.
- sdkSupportStatus SdkVersionSdkSupportStatus
- The support status for this SDK version.
- version string
- The version of the SDK used to run the job.
- versionDisplayName string
- A readable string describing the version of the SDK.
- sdk_support_status SdkVersionSdkSupportStatus
- The support status for this SDK version.
- version str
- The version of the SDK used to run the job.
- version_display_name str
- A readable string describing the version of the SDK.
- sdkSupportStatus "UNKNOWN" | "SUPPORTED" | "STALE" | "DEPRECATED" | "UNSUPPORTED"
- The support status for this SDK version.
- version String
- The version of the SDK used to run the job.
- versionDisplayName String
- A readable string describing the version of the SDK.
SdkVersionResponse, SdkVersionResponseArgs      
- Bugs List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkBugResponse>
- Known bugs found in this SDK version.
- SdkSupportStatus string
- The support status for this SDK version.
- Version string
- The version of the SDK used to run the job.
- VersionDisplayName string
- A readable string describing the version of the SDK.
- Bugs []SdkBugResponse
- Known bugs found in this SDK version.
- SdkSupportStatus string
- The support status for this SDK version.
- Version string
- The version of the SDK used to run the job.
- VersionDisplayName string
- A readable string describing the version of the SDK.
- bugs List<SdkBugResponse>
- Known bugs found in this SDK version.
- sdkSupportStatus String
- The support status for this SDK version.
- version String
- The version of the SDK used to run the job.
- versionDisplayName String
- A readable string describing the version of the SDK.
- bugs SdkBugResponse[]
- Known bugs found in this SDK version.
- sdkSupportStatus string
- The support status for this SDK version.
- version string
- The version of the SDK used to run the job.
- versionDisplayName string
- A readable string describing the version of the SDK.
- bugs Sequence[SdkBugResponse]
- Known bugs found in this SDK version.
- sdk_support_status str
- The support status for this SDK version.
- version str
- The version of the SDK used to run the job.
- version_display_name str
- A readable string describing the version of the SDK.
- bugs List<Property Map>
- Known bugs found in this SDK version.
- sdkSupportStatus String
- The support status for this SDK version.
- version String
- The version of the SDK used to run the job.
- versionDisplayName String
- A readable string describing the version of the SDK.
SdkVersionSdkSupportStatus, SdkVersionSdkSupportStatusArgs          
- Unknown
- UNKNOWN: Cloud Dataflow is unaware of this version.
- Supported
- SUPPORTED: This is a known version of an SDK, and is supported.
- Stale
- STALE: A newer version of the SDK family exists, and an update is recommended.
- Deprecated
- DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
- Unsupported
- UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
- SdkVersion Sdk Support Status Unknown
- UNKNOWN: Cloud Dataflow is unaware of this version.
- SdkVersion Sdk Support Status Supported
- SUPPORTED: This is a known version of an SDK, and is supported.
- SdkVersion Sdk Support Status Stale
- STALE: A newer version of the SDK family exists, and an update is recommended.
- SdkVersion Sdk Support Status Deprecated
- DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
- SdkVersion Sdk Support Status Unsupported
- UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
- Unknown
- UNKNOWN: Cloud Dataflow is unaware of this version.
- Supported
- SUPPORTED: This is a known version of an SDK, and is supported.
- Stale
- STALE: A newer version of the SDK family exists, and an update is recommended.
- Deprecated
- DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
- Unsupported
- UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
- Unknown
- UNKNOWN: Cloud Dataflow is unaware of this version.
- Supported
- SUPPORTED: This is a known version of an SDK, and is supported.
- Stale
- STALE: A newer version of the SDK family exists, and an update is recommended.
- Deprecated
- DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
- Unsupported
- UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
- UNKNOWN
- UNKNOWN: Cloud Dataflow is unaware of this version.
- SUPPORTED
- SUPPORTED: This is a known version of an SDK, and is supported.
- STALE
- STALE: A newer version of the SDK family exists, and an update is recommended.
- DEPRECATED
- DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
- UNSUPPORTED
- UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
- "UNKNOWN"
- UNKNOWN: Cloud Dataflow is unaware of this version.
- "SUPPORTED"
- SUPPORTED: This is a known version of an SDK, and is supported.
- "STALE"
- STALE: A newer version of the SDK family exists, and an update is recommended.
- "DEPRECATED"
- DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
- "UNSUPPORTED"
- UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
SpannerIODetails, SpannerIODetailsArgs    
- DatabaseId string
- DatabaseId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- DatabaseId string
- DatabaseId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- databaseId String
- DatabaseId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- databaseId string
- DatabaseId accessed in the connection.
- instanceId string
- InstanceId accessed in the connection.
- project string
- ProjectId accessed in the connection.
- database_id str
- DatabaseId accessed in the connection.
- instance_id str
- InstanceId accessed in the connection.
- project str
- ProjectId accessed in the connection.
- databaseId String
- DatabaseId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
SpannerIODetailsResponse, SpannerIODetailsResponseArgs      
- DatabaseId string
- DatabaseId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- DatabaseId string
- DatabaseId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- databaseId String
- DatabaseId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- databaseId string
- DatabaseId accessed in the connection.
- instanceId string
- InstanceId accessed in the connection.
- project string
- ProjectId accessed in the connection.
- database_id str
- DatabaseId accessed in the connection.
- instance_id str
- InstanceId accessed in the connection.
- project str
- ProjectId accessed in the connection.
- databaseId String
- DatabaseId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
StageSource, StageSourceArgs    
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- SizeBytes string
- Size of the source, if measurable.
- UserName string
- Human-readable name for this source; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- SizeBytes string
- Size of the source, if measurable.
- UserName string
- Human-readable name for this source; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes String
- Size of the source, if measurable.
- userName String
- Human-readable name for this source; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes string
- Size of the source, if measurable.
- userName string
- Human-readable name for this source; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform_or_collection str
- User name for the original user transform or collection with which this source is most closely associated.
- size_bytes str
- Size of the source, if measurable.
- user_name str
- Human-readable name for this source; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes String
- Size of the source, if measurable.
- userName String
- Human-readable name for this source; may be user or system generated.
StageSourceResponse, StageSourceResponseArgs      
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- SizeBytes string
- Size of the source, if measurable.
- UserName string
- Human-readable name for this source; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- SizeBytes string
- Size of the source, if measurable.
- UserName string
- Human-readable name for this source; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes String
- Size of the source, if measurable.
- userName String
- Human-readable name for this source; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes string
- Size of the source, if measurable.
- userName string
- Human-readable name for this source; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform_or_collection str
- User name for the original user transform or collection with which this source is most closely associated.
- size_bytes str
- Size of the source, if measurable.
- user_name str
- Human-readable name for this source; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes String
- Size of the source, if measurable.
- userName String
- Human-readable name for this source; may be user or system generated.
Step, StepArgs  
- Kind string
- The kind of step in the Cloud Dataflow job.
- Name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Properties Dictionary<string, string>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- Kind string
- The kind of step in the Cloud Dataflow job.
- Name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Properties map[string]string
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind String
- The kind of step in the Cloud Dataflow job.
- name String
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Map<String,String>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind string
- The kind of step in the Cloud Dataflow job.
- name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties {[key: string]: string}
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind str
- The kind of step in the Cloud Dataflow job.
- name str
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Mapping[str, str]
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind String
- The kind of step in the Cloud Dataflow job.
- name String
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Map<String>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
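Steps are normally generated by an SDK and submitted on your behalf, but because they are accepted on JobArgs they can be written out directly. A purely illustrative TypeScript sketch; the kind, step name, and property keys are placeholders, not a validated step definition:
import * as google_native from "@pulumi/google-native";

// Sketch: a job declared with a hand-written (placeholder) step list.
const job = new google_native.dataflow.v1b3.Job("raw-steps-job", {
    location: "us-central1",
    type: "JOB_TYPE_BATCH",
    steps: [{
        kind: "ParallelRead",                     // placeholder step kind
        name: "s1",                               // must be unique within the job
        properties: { user_name: "ReadInput" },   // placeholder property map
    }],
});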
StepResponse, StepResponseArgs    
- Kind string
- The kind of step in the Cloud Dataflow job.
- Name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Properties Dictionary<string, string>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- Kind string
- The kind of step in the Cloud Dataflow job.
- Name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Properties map[string]string
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind String
- The kind of step in the Cloud Dataflow job.
- name String
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Map<String,String>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind string
- The kind of step in the Cloud Dataflow job.
- name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties {[key: string]: string}
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind str
- The kind of step in the Cloud Dataflow job.
- name str
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Mapping[str, str]
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind String
- The kind of step in the Cloud Dataflow job.
- name String
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Map<String>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
TaskRunnerSettings, TaskRunnerSettingsArgs      
- Alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- BaseTaskDir string
- The location on the worker for task-specific subdirectories.
- BaseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- CommandlinesFileName string
- The file to store preprocessing commands in.
- ContinueOnException bool
- Whether to continue taskrunner if an exception is hit.
- DataflowApiVersion string
- The API version of endpoint, e.g. "v1b3"
- HarnessCommand string
- The command to launch the worker harness.
- LanguageHint string
- The suggested backend language.
- LogDir string
- The directory on the VM to store logs.
- LogToSerialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- LogUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- OauthScopes List<string>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettings
- The settings to pass to the parallel worker harness.
- StreamingWorkerMainClass string
- The streaming worker main class name.
- TaskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- TaskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- TempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- VmId string
- The ID string of the VM.
- WorkflowFileName string
- The file to store the workflow in.
- Alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- BaseTaskDir string
- The location on the worker for task-specific subdirectories.
- BaseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- CommandlinesFileName string
- The file to store preprocessing commands in.
- ContinueOnException bool
- Whether to continue taskrunner if an exception is hit.
- DataflowApiVersion string
- The API version of endpoint, e.g. "v1b3"
- HarnessCommand string
- The command to launch the worker harness.
- LanguageHint string
- The suggested backend language.
- LogDir string
- The directory on the VM to store logs.
- LogToSerialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- LogUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- OauthScopes []string
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- ParallelWorkerSettings WorkerSettings
- The settings to pass to the parallel worker harness.
- StreamingWorkerMainClass string
- The streaming worker main class name.
- TaskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- TaskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- TempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- VmId string
- The ID string of the VM.
- WorkflowFileName string
- The file to store the workflow in.
- alsologtostderr Boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir String
- The location on the worker for task-specific subdirectories.
- baseUrl String
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlinesFileName String
- The file to store preprocessing commands in.
- continueOnException Boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion String
- The API version of endpoint, e.g. "v1b3"
- harnessCommand String
- The command to launch the worker harness.
- languageHint String
- The suggested backend language.
- logDir String
- The directory on the VM to store logs.
- logToSerialconsole Boolean
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- logUploadLocation String
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes List<String>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings WorkerSettings
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass String
- The streaming worker main class name.
- taskGroup String
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser String
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix String
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId String
- The ID string of the VM.
- workflowFileName String
- The file to store the workflow in.
- alsologtostderr boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir string
- The location on the worker for task-specific subdirectories.
- baseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlinesFileName string
- The file to store preprocessing commands in.
- continueOnException boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion string
- The API version of endpoint, e.g. "v1b3"
- harnessCommand string
- The command to launch the worker harness.
- languageHint string
- The suggested backend language.
- logDir string
- The directory on the VM to store logs.
- logToSerialconsole boolean
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- logUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes string[]
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings WorkerSettings
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass string
- The streaming worker main class name.
- taskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId string
- The ID string of the VM.
- workflowFileName string
- The file to store the workflow in.
- alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- base_task_dir str
- The location on the worker for task-specific subdirectories.
- base_url str
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlines_file_name str
- The file to store preprocessing commands in.
- continue_on_exception bool
- Whether to continue taskrunner if an exception is hit.
- dataflow_api_version str
- The API version of endpoint, e.g. "v1b3"
- harness_command str
- The command to launch the worker harness.
- language_hint str
- The suggested backend language.
- log_dir str
- The directory on the VM to store logs.
- log_to_serialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- log_upload_location str
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauth_scopes Sequence[str]
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallel_worker_settings WorkerSettings
- The settings to pass to the parallel worker harness.
- streaming_worker_main_class str
- The streaming worker main class name.
- task_group str
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- task_user str
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- temp_storage_prefix str
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vm_id str
- The ID string of the VM.
- workflow_file_name str
- The file to store the workflow in.
- alsologtostderr Boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir String
- The location on the worker for task-specific subdirectories.
- baseUrl String
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlinesFileName String
- The file to store preprocessing commands in.
- continueOnException Boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion String
- The API version of endpoint, e.g. "v1b3"
- harnessCommand String
- The command to launch the worker harness.
- languageHint String
- The suggested backend language.
- logDir String
- The directory on the VM to store logs.
- logToSerialconsole Boolean
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- logUploadLocation String
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes List<String>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings Property Map
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass String
- The streaming worker main class name.
- taskGroup String
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser String
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix String
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId String
- The ID string of the VM.
- workflowFileName String
- The file to store the workflow in.
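TaskRunnerSettings are attached per worker pool. A hedged TypeScript sketch, assuming the Environment/WorkerPool nesting documented elsewhere on this page (environment.workerPools[].taskrunnerSettings); the bucket paths are placeholders:
import * as google_native from "@pulumi/google-native";

// Sketch: taskrunner logging and temp-storage settings for a worker pool.
const job = new google_native.dataflow.v1b3.Job("taskrunner-job", {
    location: "us-central1",
    type: "JOB_TYPE_BATCH",
    environment: {
        workerPools: [{
            taskrunnerSettings: {
                alsologtostderr: true,
                logUploadLocation: "storage.googleapis.com/my-bucket/logs",   // placeholder bucket
                tempStoragePrefix: "storage.googleapis.com/my-bucket/temp",   // placeholder bucket
                oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
            },
        }],
    },
});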
TaskRunnerSettingsResponse, TaskRunnerSettingsResponseArgs        
- Alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- BaseTaskDir string
- The location on the worker for task-specific subdirectories.
- BaseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- CommandlinesFileName string
- The file to store preprocessing commands in.
- ContinueOnException bool
- Whether to continue taskrunner if an exception is hit.
- DataflowApiVersion string
- The API version of endpoint, e.g. "v1b3"
- HarnessCommand string
- The command to launch the worker harness.
- LanguageHint string
- The suggested backend language.
- LogDir string
- The directory on the VM to store logs.
- LogToSerialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- LogUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- OauthScopes List<string>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- StreamingWorkerMainClass string
- The streaming worker main class name.
- TaskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- TaskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- TempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- VmId string
- The ID string of the VM.
- WorkflowFileName string
- The file to store the workflow in.
- Alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- BaseTaskDir string
- The location on the worker for task-specific subdirectories.
- BaseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- CommandlinesFileName string
- The file to store preprocessing commands in.
- ContinueOnException bool
- Whether to continue taskrunner if an exception is hit.
- DataflowApiVersion string
- The API version of endpoint, e.g. "v1b3"
- HarnessCommand string
- The command to launch the worker harness.
- LanguageHint string
- The suggested backend language.
- LogDir string
- The directory on the VM to store logs.
- LogToSerialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- LogUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- OauthScopes []string
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- ParallelWorkerSettings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- StreamingWorkerMainClass string
- The streaming worker main class name.
- TaskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- TaskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- TempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- VmId string
- The ID string of the VM.
- WorkflowFileName string
- The file to store the workflow in.
- alsologtostderr Boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir String
- The location on the worker for task-specific subdirectories.
- baseUrl String
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlinesFileName String
- The file to store preprocessing commands in.
- continueOnException Boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion String
- The API version of endpoint, e.g. "v1b3"
- harnessCommand String
- The command to launch the worker harness.
- languageHint String
- The suggested backend language.
- logDir String
- The directory on the VM to store logs.
- logToSerialconsole Boolean
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- logUploadLocation String
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes List<String>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass String
- The streaming worker main class name.
- taskGroup String
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser String
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix String
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId String
- The ID string of the VM.
- workflowFileName String
- The file to store the workflow in.
- alsologtostderr boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir string
- The location on the worker for task-specific subdirectories.
- baseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlinesFileName string
- The file to store preprocessing commands in.
- continueOnException boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion string
- The API version of endpoint, e.g. "v1b3"
- harnessCommand string
- The command to launch the worker harness.
- languageHint string
- The suggested backend language.
- logDir string
- The directory on the VM to store logs.
- logToSerialconsole boolean
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- logUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes string[]
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass string
- The streaming worker main class name.
- taskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId string
- The ID string of the VM.
- workflowFileName string
- The file to store the workflow in.
- alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- base_task_dir str
- The location on the worker for task-specific subdirectories.
- base_url str
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlines_file_name str
- The file to store preprocessing commands in.
- continue_on_exception bool
- Whether to continue taskrunner if an exception is hit.
- dataflow_api_version str
- The API version of endpoint, e.g. "v1b3"
- harness_command str
- The command to launch the worker harness.
- language_hint str
- The suggested backend language.
- log_dir str
- The directory on the VM to store logs.
- log_to_serialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- log_upload_location str
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauth_scopes Sequence[str]
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallel_worker_settings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- streaming_worker_main_class str
- The streaming worker main class name.
- task_group str
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- task_user str
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- temp_storage_prefix str
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vm_id str
- The ID string of the VM.
- workflow_file_name str
- The file to store the workflow in.
- alsologtostderr Boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir String
- The location on the worker for task-specific subdirectories.
- baseUrl String
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- commandlinesFileName String
- The file to store preprocessing commands in.
- continueOnException Boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion String
- The API version of endpoint, e.g. "v1b3"
- harnessCommand String
- The command to launch the worker harness.
- languageHint String
- The suggested backend language.
- logDir String
- The directory on the VM to store logs.
- logToSerialconsole Boolean
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- logUploadLocation String
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes List<String>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings Property Map
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass String
- The streaming worker main class name.
- taskGroup String
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser String
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix String
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId String
- The ID string of the VM.
- workflowFileName String
- The file to store the workflow in.
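The taskrunner settings above are populated by the Dataflow service for each worker pool; per the WorkerPool documentation further down, users should generally not set them. If you only want to inspect what the service chose, a minimal TypeScript sketch (assuming a Job resource named job is declared elsewhere in the program; property paths follow the tables above) could read them back from the job's environment output:
import * as google_native from "@pulumi/google-native";

// Assumption: `job` is a google-native.dataflow/v1b3.Job declared elsewhere in this program.
declare const job: google_native.dataflow.v1b3.Job;

// Each worker pool in the environment output carries a TaskRunnerSettingsResponse with the
// fields listed above (baseUrl, logDir, tempStoragePrefix, and so on).
export const taskrunnerTempPrefixes = job.environment.apply(env =>
    (env.workerPools ?? []).map(pool => pool.taskrunnerSettings?.tempStoragePrefix));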
TransformSummary, TransformSummaryArgs    
- DisplayData List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Display Data> 
- Transform-specific display data.
- Id string
- SDK generated id of this transform instance.
- InputCollectionName List<string>
- User names for all collection inputs to this transform.
- Kind
Pulumi.Google Native. Dataflow. V1b3. Transform Summary Kind 
- Type of transform.
- Name string
- User provided name for this transform instance.
- OutputCollectionName List<string>
- User names for all collection outputs to this transform.
- DisplayData []DisplayData 
- Transform-specific display data.
- Id string
- SDK generated id of this transform instance.
- InputCollectionName []string
- User names for all collection inputs to this transform.
- Kind
TransformSummary Kind 
- Type of transform.
- Name string
- User provided name for this transform instance.
- OutputCollectionName []string
- User names for all collection outputs to this transform.
- displayData List<DisplayData> 
- Transform-specific display data.
- id String
- SDK generated id of this transform instance.
- inputCollectionName List<String>
- User names for all collection inputs to this transform.
- kind
TransformSummary Kind 
- Type of transform.
- name String
- User provided name for this transform instance.
- outputCollectionName List<String>
- User names for all collection outputs to this transform.
- displayData DisplayData[] 
- Transform-specific display data.
- id string
- SDK generated id of this transform instance.
- inputCollectionName string[]
- User names for all collection inputs to this transform.
- kind
TransformSummary Kind 
- Type of transform.
- name string
- User provided name for this transform instance.
- outputCollectionName string[]
- User names for all collection outputs to this transform.
- display_data Sequence[DisplayData] 
- Transform-specific display data.
- id str
- SDK generated id of this transform instance.
- input_collection_name Sequence[str]
- User names for all collection inputs to this transform.
- kind
TransformSummary Kind 
- Type of transform.
- name str
- User provided name for this transform instance.
- output_collection_name Sequence[str]
- User names for all collection outputs to this transform.
- displayData List<Property Map>
- Transform-specific display data.
- id String
- SDK generated id of this transform instance.
- inputCollectionName List<String>
- User names for all collection inputs to this transform.
- kind "UNKNOWN_KIND" | "PAR_DO_KIND" | "GROUP_BY_KEY_KIND" | "FLATTEN_KIND" | "READ_KIND" | "WRITE_KIND" | "CONSTANT_KIND" | "SINGLETON_KIND" | "SHUFFLE_KIND"
- Type of transform.
- name String
- User provided name for this transform instance.
- outputCollectionName List<String>
- User names for all collection outputs to this transform.
TransformSummaryKind, TransformSummaryKindArgs      
- UnknownKind
- UNKNOWN_KIND: Unrecognized transform type.
- ParDoKind
- PAR_DO_KIND: ParDo transform.
- GroupByKeyKind
- GROUP_BY_KEY_KIND: Group By Key transform.
- FlattenKind
- FLATTEN_KIND: Flatten transform.
- ReadKind
- READ_KIND: Read transform.
- WriteKind
- WRITE_KIND: Write transform.
- ConstantKind
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SingletonKind
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ShuffleKind
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- TransformSummaryKindUnknownKind
- UNKNOWN_KIND: Unrecognized transform type.
- TransformSummaryKindParDoKind
- PAR_DO_KIND: ParDo transform.
- TransformSummaryKindGroupByKeyKind
- GROUP_BY_KEY_KIND: Group By Key transform.
- TransformSummaryKindFlattenKind
- FLATTEN_KIND: Flatten transform.
- TransformSummaryKindReadKind
- READ_KIND: Read transform.
- TransformSummaryKindWriteKind
- WRITE_KIND: Write transform.
- TransformSummaryKindConstantKind
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- TransformSummaryKindSingletonKind
- SINGLETON_KIND: Creates a Singleton view of a collection.
- TransformSummaryKindShuffleKind
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- UnknownKind
- UNKNOWN_KIND: Unrecognized transform type.
- ParDoKind
- PAR_DO_KIND: ParDo transform.
- GroupByKeyKind
- GROUP_BY_KEY_KIND: Group By Key transform.
- FlattenKind
- FLATTEN_KIND: Flatten transform.
- ReadKind
- READ_KIND: Read transform.
- WriteKind
- WRITE_KIND: Write transform.
- ConstantKind
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SingletonKind
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ShuffleKind
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- UnknownKind
- UNKNOWN_KIND: Unrecognized transform type.
- ParDoKind
- PAR_DO_KIND: ParDo transform.
- GroupByKeyKind
- GROUP_BY_KEY_KIND: Group By Key transform.
- FlattenKind
- FLATTEN_KIND: Flatten transform.
- ReadKind
- READ_KIND: Read transform.
- WriteKind
- WRITE_KIND: Write transform.
- ConstantKind
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SingletonKind
- SINGLETON_KIND: Creates a Singleton view of a collection.
- ShuffleKind
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- UNKNOWN_KIND
- UNKNOWN_KIND: Unrecognized transform type.
- PAR_DO_KIND
- PAR_DO_KIND: ParDo transform.
- GROUP_BY_KEY_KIND
- GROUP_BY_KEY_KIND: Group By Key transform.
- FLATTEN_KIND
- FLATTEN_KIND: Flatten transform.
- READ_KIND
- READ_KIND: Read transform.
- WRITE_KIND
- WRITE_KIND: Write transform.
- CONSTANT_KIND
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- SINGLETON_KIND
- SINGLETON_KIND: Creates a Singleton view of a collection.
- SHUFFLE_KIND
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
- "UNKNOWN_KIND"
- UNKNOWN_KIND: Unrecognized transform type.
- "PAR_DO_KIND"
- PAR_DO_KIND: ParDo transform.
- "GROUP_BY_KEY_KIND"
- GROUP_BY_KEY_KIND: Group By Key transform.
- "FLATTEN_KIND"
- FLATTEN_KIND: Flatten transform.
- "READ_KIND"
- READ_KIND: Read transform.
- "WRITE_KIND"
- WRITE_KIND: Write transform.
- "CONSTANT_KIND"
- CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
- "SINGLETON_KIND"
- SINGLETON_KIND: Creates a Singleton view of a collection.
- "SHUFFLE_KIND"
- SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
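TransformSummary values, and the kind constants above, appear in service-populated output such as the job's pipelineDescription rather than in anything you author directly. As a rough TypeScript sketch (again assuming an existing Job resource named job, and that the service has populated pipelineDescription for it), each original transform's name and kind could be surfaced as a stack output:
import * as google_native from "@pulumi/google-native";

// Assumption: `job` is a google-native.dataflow/v1b3.Job declared elsewhere in this program.
declare const job: google_native.dataflow.v1b3.Job;

// pipelineDescription is filled in by the service; each originalPipelineTransform entry is a
// TransformSummaryResponse whose `kind` is one of the string values listed above
// (for example "PAR_DO_KIND" or "GROUP_BY_KEY_KIND").
export const transformKinds = job.pipelineDescription.apply(desc =>
    (desc.originalPipelineTransform ?? []).map(t => `${t.name}: ${t.kind}`));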
TransformSummaryResponse, TransformSummaryResponseArgs      
- DisplayData List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Display Data Response> 
- Transform-specific display data.
- InputCollectionName List<string>
- User names for all collection inputs to this transform.
- Kind string
- Type of transform.
- Name string
- User provided name for this transform instance.
- OutputCollectionName List<string>
- User names for all collection outputs to this transform.
- DisplayData []DisplayData Response 
- Transform-specific display data.
- InputCollectionName []string
- User names for all collection inputs to this transform.
- Kind string
- Type of transform.
- Name string
- User provided name for this transform instance.
- OutputCollectionName []string
- User names for all collection outputs to this transform.
- displayData List<DisplayData Response> 
- Transform-specific display data.
- inputCollectionName List<String>
- User names for all collection inputs to this transform.
- kind String
- Type of transform.
- name String
- User provided name for this transform instance.
- outputCollectionName List<String>
- User names for all collection outputs to this transform.
- displayData DisplayData Response[] 
- Transform-specific display data.
- inputCollectionName string[]
- User names for all collection inputs to this transform.
- kind string
- Type of transform.
- name string
- User provided name for this transform instance.
- outputCollectionName string[]
- User names for all collection outputs to this transform.
- display_data Sequence[DisplayData Response] 
- Transform-specific display data.
- input_collection_name Sequence[str]
- User names for all collection inputs to this transform.
- kind str
- Type of transform.
- name str
- User provided name for this transform instance.
- output_collection_name Sequence[str]
- User names for all collection outputs to this transform.
- displayData List<Property Map>
- Transform-specific display data.
- inputCollectionName List<String>
- User names for all collection inputs to this transform.
- kind String
- Type of transform.
- name String
- User provided name for this transform instance.
- outputCollectionName List<String>
- User names for all collection outputs to this transform.
WorkerPool, WorkerPoolArgs    
- WorkerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- AutoscalingSettings Pulumi.Google Native. Dataflow. V1b3. Inputs. Autoscaling Settings 
- Settings for autoscaling of this WorkerPool.
- DataDisks List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Disk> 
- Data disks that are used by a VM in this workflow.
- DefaultPackageSet Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolDefaultPackageSet
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- DiskSizeGb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskSourceImage string
- Fully qualified source image for disks.
- DiskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- IpConfiguration Pulumi.Google Native. Dataflow. V1b3. Worker Pool Ip Configuration 
- Configuration for VM IPs.
- Kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- MachineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- Metadata Dictionary<string, string>
- Metadata to set on the Google Compute Engine VMs.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumThreadsPerWorker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- NumWorkers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- OnHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- Packages
List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Package> 
- Packages to be installed on workers.
- PoolArgs Dictionary<string, string>
- Extra arguments for this worker pool.
- SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImage>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- TaskrunnerSettings Pulumi.Google Native. Dataflow. V1b3. Inputs. Task Runner Settings 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- TeardownPolicy Pulumi.Google Native. Dataflow. V1b3. Worker Pool Teardown Policy 
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- Zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- WorkerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- AutoscalingSettings AutoscalingSettings 
- Settings for autoscaling of this WorkerPool.
- DataDisks []Disk
- Data disks that are used by a VM in this workflow.
- DefaultPackageSet WorkerPoolDefaultPackageSet
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- DiskSizeGb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskSourceImage string
- Fully qualified source image for disks.
- DiskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- IpConfiguration WorkerPool Ip Configuration 
- Configuration for VM IPs.
- Kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- MachineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- Metadata map[string]string
- Metadata to set on the Google Compute Engine VMs.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumThreadsPerWorker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- NumWorkers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- OnHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- Packages []Package
- Packages to be installed on workers.
- PoolArgs map[string]string
- Extra arguments for this worker pool.
- SdkHarnessContainerImages []SdkHarnessContainerImage
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- TaskrunnerSettings TaskRunner Settings 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- TeardownPolicy WorkerPool Teardown Policy 
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- Zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage String
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- autoscalingSettings AutoscalingSettings 
- Settings for autoscaling of this WorkerPool.
- dataDisks List<Disk>
- Data disks that are used by a VM in this workflow.
- defaultPackageSet WorkerPoolDefaultPackageSet
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb Integer
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage String
- Fully qualified source image for disks.
- diskType String
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration WorkerPool Ip Configuration 
- Configuration for VM IPs.
- kind String
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType String
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Map<String,String>
- Metadata to set on the Google Compute Engine VMs.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker Integer
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers Integer
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance String
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages List<Package>
- Packages to be installed on workers.
- poolArgs Map<String,String>
- Extra arguments for this worker pool.
- sdkHarnessContainerImages List<SdkHarnessContainerImage>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings TaskRunner Settings 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy WorkerPool Teardown Policy 
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- zone String
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- autoscalingSettings AutoscalingSettings 
- Settings for autoscaling of this WorkerPool.
- dataDisks Disk[]
- Data disks that are used by a VM in this workflow.
- defaultPackageSet WorkerPoolDefaultPackageSet
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb number
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage string
- Fully qualified source image for disks.
- diskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration WorkerPool Ip Configuration 
- Configuration for VM IPs.
- kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata {[key: string]: string}
- Metadata to set on the Google Compute Engine VMs.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker number
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers number
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages Package[]
- Packages to be installed on workers.
- poolArgs {[key: string]: string}
- Extra arguments for this worker pool.
- sdkHarnessContainerImages SdkHarnessContainerImage[]
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings TaskRunner Settings 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy WorkerPool Teardown Policy 
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- worker_harness_container_image str
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- autoscaling_settings AutoscalingSettings 
- Settings for autoscaling of this WorkerPool.
- data_disks Sequence[Disk]
- Data disks that are used by a VM in this workflow.
- default_package_set WorkerPoolDefaultPackageSet
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- disk_size_gb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- disk_source_image str
- Fully qualified source image for disks.
- disk_type str
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ip_configuration WorkerPool Ip Configuration 
- Configuration for VM IPs.
- kind str
- The kind of the worker pool; currently only harness and shuffle are supported.
- machine_type str
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Mapping[str, str]
- Metadata to set on the Google Compute Engine VMs.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_threads_per_worker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- num_workers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- on_host_maintenance str
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages Sequence[Package]
- Packages to be installed on workers.
- pool_args Mapping[str, str]
- Extra arguments for this worker pool.
- sdk_harness_container_images Sequence[SdkHarnessContainerImage]
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunner_settings TaskRunner Settings 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardown_policy WorkerPool Teardown Policy 
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- zone str
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage String
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- autoscalingSettings Property Map
- Settings for autoscaling of this WorkerPool.
- dataDisks List<Property Map>
- Data disks that are used by a VM in this workflow.
- defaultPackage "DEFAULT_PACKAGE_SET_UNKNOWN" | "DEFAULT_PACKAGE_SET_NONE" | "DEFAULT_PACKAGE_SET_JAVA" | "DEFAULT_PACKAGE_SET_PYTHON"Set 
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSize NumberGb 
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSource StringImage 
- Fully qualified source image for disks.
- diskType String
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"
- Configuration for VM IPs.
- kind String
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType String
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Map<String>
- Metadata to set on the Google Compute Engine VMs.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker Number
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers Number
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance String
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages List<Property Map>
- Packages to be installed on workers.
- poolArgs Map<String>
- Extra arguments for this worker pool.
- sdkHarnessContainerImages List<Property Map>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings Property Map
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy "TEARDOWN_POLICY_UNKNOWN" | "TEARDOWN_ALWAYS" | "TEARDOWN_ON_SUCCESS" | "TEARDOWN_NEVER"
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- zone String
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
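As a minimal, illustrative TypeScript sketch (not a complete pipeline definition; a real job also needs steps or a template-based launch), the shape of environment.workerPools might look like the following. The machine type, counts, and network values are placeholders, the enum-typed fields use the string values documented in the enum tables that follow, and any field left out falls back to a service-chosen default.
import * as google_native from "@pulumi/google-native";

// A sketch of the WorkerPool shape only; values are illustrative placeholders.
const job = new google_native.dataflow.v1b3.Job("example-job", {
    location: "us-central1",
    environment: {
        workerPools: [{
            kind: "harness",                   // only harness and shuffle pools are supported
            machineType: "n1-standard-1",      // omit to let the service choose a default
            numWorkers: 2,
            diskSizeGb: 50,
            defaultPackageSet: "DEFAULT_PACKAGE_SET_JAVA",
            ipConfiguration: "WORKER_IP_PRIVATE",
            teardownPolicy: "TEARDOWN_ALWAYS", // recommended except for supervised test jobs
            subnetwork: "regions/us-central1/subnetworks/my-subnetwork",
            autoscalingSettings: {
                maxNumWorkers: 5,
            },
        }],
    },
});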
WorkerPoolDefaultPackageSet, WorkerPoolDefaultPackageSetArgs          
- DefaultPackageSetUnknown
- DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
- DefaultPackageSetNone
- DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- DefaultPackageSetJava
- DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
- DefaultPackageSetPython
- DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
- WorkerPoolDefaultPackageSetDefaultPackageSetUnknown
- DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
- WorkerPoolDefaultPackageSetDefaultPackageSetNone
- DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- WorkerPoolDefaultPackageSetDefaultPackageSetJava
- DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
- WorkerPoolDefaultPackageSetDefaultPackageSetPython
- DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
- DefaultPackageSetUnknown
- DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
- DefaultPackageSetNone
- DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- DefaultPackageSetJava
- DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
- DefaultPackageSetPython
- DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
- DefaultPackageSetUnknown
- DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
- DefaultPackageSetNone
- DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- DefaultPackageSetJava
- DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
- DefaultPackageSetPython
- DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
- DEFAULT_PACKAGE_SET_UNKNOWN
- DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
- DEFAULT_PACKAGE_SET_NONE
- DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- DEFAULT_PACKAGE_SET_JAVA
- DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
- DEFAULT_PACKAGE_SET_PYTHON
- DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
- "DEFAULT_PACKAGE_SET_UNKNOWN"
- DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
- "DEFAULT_PACKAGE_SET_NONE"
- DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
- "DEFAULT_PACKAGE_SET_JAVA"
- DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
- "DEFAULT_PACKAGE_SET_PYTHON"
- DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
WorkerPoolIpConfiguration, WorkerPoolIpConfigurationArgs        
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerPoolIpConfigurationWorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerPoolIpConfigurationWorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerPoolIpConfigurationWorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WorkerIpUnspecified
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WorkerIpPublic
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WorkerIpPrivate
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- WORKER_IP_UNSPECIFIED
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- WORKER_IP_PUBLIC
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- WORKER_IP_PRIVATE
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
- "WORKER_IP_UNSPECIFIED"
- WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
- "WORKER_IP_PUBLIC"
- WORKER_IP_PUBLIC: Workers should have public IP addresses.
- "WORKER_IP_PRIVATE"
- WORKER_IP_PRIVATE: Workers should have private IP addresses.
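In YAML the quoted string above is supplied directly; the typed SDKs expose matching constants, and in TypeScript the documented string literal is also assignable. A short hedged sketch follows (the types.input path is an assumption based on the usual Pulumi SDK layout; private worker IPs also typically require Private Google Access on the chosen subnetwork):
import * as google_native from "@pulumi/google-native";

// Assumption: input type path per the standard Pulumi SDK layout for @pulumi/google-native.
const privateWorkerPool: google_native.types.input.dataflow.v1b3.WorkerPoolArgs = {
    ipConfiguration: "WORKER_IP_PRIVATE",  // workers receive private IP addresses only
    subnetwork: "regions/us-central1/subnetworks/my-subnetwork",
};
// Pass this object as an element of environment.workerPools on the Job resource, as in the sketch above.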
WorkerPoolResponse, WorkerPoolResponseArgs      
- AutoscalingSettings Pulumi.Google Native. Dataflow. V1b3. Inputs. Autoscaling Settings Response 
- Settings for autoscaling of this WorkerPool.
- DataDisks List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Disk Response> 
- Data disks that are used by a VM in this workflow.
- DefaultPackageSet string
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- DiskSizeGb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskSourceImage string
- Fully qualified source image for disks.
- DiskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- IpConfiguration string
- Configuration for VM IPs.
- Kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- MachineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- Metadata Dictionary<string, string>
- Metadata to set on the Google Compute Engine VMs.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumThreadsPerWorker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- NumWorkers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- OnHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- Packages
List<Pulumi.Google Native. Dataflow. V1b3. Inputs. Package Response> 
- Packages to be installed on workers.
- PoolArgs Dictionary<string, string>
- Extra arguments for this worker pool.
- SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageResponse>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- TaskrunnerSettings Pulumi.Google Native. Dataflow. V1b3. Inputs. Task Runner Settings Response 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- TeardownPolicy string
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- WorkerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- Zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- AutoscalingSettings AutoscalingSettings Response 
- Settings for autoscaling of this WorkerPool.
- DataDisks []DiskResponse 
- Data disks that are used by a VM in this workflow.
- DefaultPackageSet string
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- DiskSizeGb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskSourceImage string
- Fully qualified source image for disks.
- DiskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- IpConfiguration string
- Configuration for VM IPs.
- Kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- MachineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- Metadata map[string]string
- Metadata to set on the Google Compute Engine VMs.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumThreadsPerWorker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- NumWorkers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- OnHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- Packages
[]PackageResponse 
- Packages to be installed on workers.
- PoolArgs map[string]string
- Extra arguments for this worker pool.
- SdkHarnessContainerImages []SdkHarnessContainerImageResponse
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- TaskrunnerSettings TaskRunner Settings Response 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- TeardownPolicy string
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- WorkerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- Zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscalingSettings AutoscalingSettings Response 
- Settings for autoscaling of this WorkerPool.
- dataDisks List<DiskResponse> 
- Data disks that are used by a VM in this workflow.
- defaultPackageSet String
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb Integer
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage String
- Fully qualified source image for disks.
- diskType String
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration String
- Configuration for VM IPs.
- kind String
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType String
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Map<String,String>
- Metadata to set on the Google Compute Engine VMs.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker Integer
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers Integer
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance String
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages
List<PackageResponse> 
- Packages to be installed on workers.
- poolArgs Map<String,String>
- Extra arguments for this worker pool.
- sdkHarnessContainerImages List<SdkHarnessContainerImageResponse>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings TaskRunner Settings Response 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy String
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage String
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone String
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscalingSettings AutoscalingSettings Response 
- Settings for autoscaling of this WorkerPool.
- dataDisks DiskResponse[] 
- Data disks that are used by a VM in this workflow.
- defaultPackageSet string
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb number
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage string
- Fully qualified source image for disks.
- diskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration string
- Configuration for VM IPs.
- kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata {[key: string]: string}
- Metadata to set on the Google Compute Engine VMs.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker number
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers number
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages
PackageResponse[] 
- Packages to be installed on workers.
- poolArgs {[key: string]: string}
- Extra arguments for this worker pool.
- sdkHarnessContainerImages SdkHarnessContainerImageResponse[]
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings TaskRunner Settings Response 
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy string
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscaling_settings AutoscalingSettingsResponse
- Settings for autoscaling of this WorkerPool.
- data_disks Sequence[DiskResponse] 
- Data disks that are used by a VM in this workflow.
- default_package_set str
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- disk_size_gb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- disk_source_image str
- Fully qualified source image for disks.
- disk_type str
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ip_configuration str
- Configuration for VM IPs.
- kind str
- The kind of the worker pool; currently only harness and shuffle are supported.
- machine_type str
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Mapping[str, str]
- Metadata to set on the Google Compute Engine VMs.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_threads_per_worker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- num_workers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- on_host_maintenance str
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages Sequence[PackageResponse]
- Packages to be installed on workers.
- pool_args Mapping[str, str]
- Extra arguments for this worker pool.
- sdk_harness_container_images Sequence[SdkHarnessContainerImageResponse]
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunner_settings TaskRunnerSettingsResponse
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardown_policy str
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- worker_harness_container_image str
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone str
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscalingSettings Property Map
- Settings for autoscaling of this WorkerPool.
- dataDisks List<Property Map>
- Data disks that are used by a VM in this workflow.
- defaultPackageSet String
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb Number
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage String
- Fully qualified source image for disks.
- diskType String
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration String
- Configuration for VM IPs.
- kind String
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType String
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Map<String>
- Metadata to set on the Google Compute Engine VMs.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker Number
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers Number
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance String
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages List<Property Map>
- Packages to be installed on workers.
- poolArgs Map<String>
- Extra arguments for this worker pool.
- sdkHarnessContainerImages List<Property Map>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings Property Map
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy String
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage String
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone String
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
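As a minimal sketch (TypeScript), a worker pool can be supplied through the job's environment. The project ID, machine type, and sizes below are illustrative placeholders, not service defaults.

import * as google_native from "@pulumi/google-native";

// Illustrative worker pool using the WorkerPool inputs documented above.
// Fields that are left unset fall back to service-chosen defaults.
const job = new google_native.dataflow.v1b3.Job("example-job", {
    project: "my-project",      // hypothetical project ID
    location: "us-central1",
    environment: {
        workerPools: [{
            kind: "harness",
            machineType: "n1-standard-1",
            numWorkers: 2,
            diskSizeGb: 50,
            network: "default",
            zone: "us-central1-f",
        }],
    },
});

As the field descriptions note, anything omitted here (disk type, threads per worker, and so on) is chosen by the service.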
WorkerPoolTeardownPolicy, WorkerPoolTeardownPolicyArgs        
- TeardownPolicyUnknown
- TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
- TeardownAlways
- TEARDOWN_ALWAYS: Always tear down the resource.
- TeardownOnSuccess
- TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
- TeardownNever
- TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
- WorkerPoolTeardownPolicyTeardownPolicyUnknown
- TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
- WorkerPoolTeardownPolicyTeardownAlways
- TEARDOWN_ALWAYS: Always tear down the resource.
- WorkerPoolTeardownPolicyTeardownOnSuccess
- TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
- WorkerPoolTeardownPolicyTeardownNever
- TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
- TeardownPolicyUnknown
- TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
- TeardownAlways
- TEARDOWN_ALWAYS: Always tear down the resource.
- TeardownOnSuccess
- TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
- TeardownNever
- TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
- TeardownPolicyUnknown
- TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
- TeardownAlways
- TEARDOWN_ALWAYS: Always tear down the resource.
- TeardownOnSuccess
- TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
- TeardownNever
- TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
- TEARDOWN_POLICY_UNKNOWN
- TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
- TEARDOWN_ALWAYS
- TEARDOWN_ALWAYS: Always tear down the resource.
- TEARDOWN_ON_SUCCESS
- TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
- TEARDOWN_NEVER
- TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
- "TEARDOWN_POLICY_UNKNOWN"
- TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
- "TEARDOWN_ALWAYS"
- TEARDOWN_ALWAYS: Always tear down the resource.
- "TEARDOWN_ON_SUCCESS"
- TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
- "TEARDOWN_NEVER"
- TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
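A short sketch (TypeScript) of setting the policy on a worker pool fragment. The raw string value is used here; the SDK's WorkerPoolTeardownPolicy enum is the equivalent typed form.

// Worker pool fragment whose workers are always torn down when the job finishes.
// "TEARDOWN_ALWAYS" is the string form of the enum value documented above.
const pool = {
    kind: "harness",
    teardownPolicy: "TEARDOWN_ALWAYS",
};

A fragment like this would be passed in the environment.workerPools list shown earlier; TEARDOWN_ALWAYS is the policy Google recommends outside of small, manually supervised test jobs.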
WorkerSettings, WorkerSettingsArgs    
- BaseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- ReportingEnabled bool
- Whether to send work progress updates to the service.
- ServicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- ShuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- WorkerId string
- The ID of the worker running this pipeline.
- BaseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- ReportingEnabled bool
- Whether to send work progress updates to the service.
- ServicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- ShuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- WorkerId string
- The ID of the worker running this pipeline.
- baseUrl String
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reportingEnabled Boolean
- Whether to send work progress updates to the service.
- servicePath String
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath String
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId String
- The ID of the worker running this pipeline.
- baseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reportingEnabled boolean
- Whether to send work progress updates to the service.
- servicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId string
- The ID of the worker running this pipeline.
- base_url str
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reporting_enabled bool
- Whether to send work progress updates to the service.
- service_path str
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffle_service_path str
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- temp_storage_prefix str
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- worker_id str
- The ID of the worker running this pipeline.
- baseUrl String
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reportingEnabled Boolean
- Whether to send work progress updates to the service.
- servicePath String
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath String
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId String
- The ID of the worker running this pipeline.
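For illustration, the WorkerSettings shape as a TypeScript object literal, using the example values from the field descriptions above. The bucket name and worker ID are hypothetical, and in practice these settings are normally populated by the service rather than set by users.

// Illustrative WorkerSettings values; the bucket and worker ID are placeholders.
const workerSettings = {
    baseUrl: "http://www.googleapis.com/",
    reportingEnabled: true,
    servicePath: "dataflow/v1b3/projects",
    shuffleServicePath: "shuffle/v1beta1",
    tempStoragePrefix: "storage.googleapis.com/my-bucket/tmp",
    workerId: "worker-0",
};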
WorkerSettingsResponse, WorkerSettingsResponseArgs      
- BaseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- ReportingEnabled bool
- Whether to send work progress updates to the service.
- ServicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- ShuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- WorkerId string
- The ID of the worker running this pipeline.
- BaseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- ReportingEnabled bool
- Whether to send work progress updates to the service.
- ServicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- ShuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- WorkerId string
- The ID of the worker running this pipeline.
- baseUrl String
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reportingEnabled Boolean
- Whether to send work progress updates to the service.
- servicePath String
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath String
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId String
- The ID of the worker running this pipeline.
- baseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reportingEnabled boolean
- Whether to send work progress updates to the service.
- servicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId string
- The ID of the worker running this pipeline.
- base_url str
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reporting_enabled bool
- Whether to send work progress updates to the service.
- service_path str
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffle_service_path str
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- temp_storage_prefix str
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- worker_id str
- The ID of the worker running this pipeline.
- baseUrl String
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
- reportingEnabled Boolean
- Whether to send work progress updates to the service.
- servicePath String
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath String
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId String
- The ID of the worker running this pipeline.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0