job

Module to interact with jobs.

This module contains a class to interact with a job in EMS data integration.

Typical usage example:

```python
job = data_pool.get_job(job_id)
job.name = "NEW_NAME"
job.update()

job.execute()

data_model_execution = job.get_data_model_execution("<EXECUTION_ID_HERE>")

job.execute(
    transformation_ids=[transformation.id],
    data_model_execution_configurations=[
        DataModelExecutionConfiguration(
            data_model_execution_id=data_model_execution.id,
            tables=[t.id for t in data_model_execution.tables],
        )
    ],
)

job.delete()
```

Job

Bases: JobTransport

Data job object to interact with data-job-specific data integration endpoints.

time_stamp class-attribute instance-attribute

time_stamp = Field(alias='timeStamp')

status class-attribute instance-attribute

status = Field(alias='status')

current_execution_id class-attribute instance-attribute

current_execution_id = Field(alias='currentExecutionId')

dag_based_execution_enabled class-attribute instance-attribute

dag_based_execution_enabled = Field(
    alias="dagBasedExecutionEnabled"
)

latest_execution_item_id class-attribute instance-attribute

latest_execution_item_id = Field(
    alias="latestExecutionItemId"
)

client class-attribute instance-attribute

client = Field(..., exclude=True)

id instance-attribute

id

Id of job.

data_pool_id instance-attribute

data_pool_id

Id of data pool where job is located.

data_source_id instance-attribute

data_source_id

Id of data connection where job is located.

name instance-attribute

name

Name of job.

from_transport classmethod

from_transport(client, job_transport)

Creates high-level job object from given JobTransport.

Parameters:

  • client (Client) –

    Client to use to make API calls for given job.

  • job_transport (JobTransport) –

    JobTransport object containing properties of job.

Returns:

  • Job

    A Job object with properties from transport and given client.

update

update()

Pushes local changes of job to EMS and updates properties with response from EMS.

sync

sync()

Syncs job properties with EMS.
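
For example, a minimal sketch of refreshing local state after the job has changed on the EMS side, assuming a data_pool and job id as in the module example above:

```python
job = data_pool.get_job(job_id)
job.sync()  # overwrite local properties with the current state from EMS
print(job.status)
```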

delete

delete()

Deletes job.

copy_to

copy_to(
    destination_team_domain,
    destination_data_pool_id,
    destination_data_source_id=None,
    **kwargs
)

Copies data job to the specified domain in the same realm.

Parameters:

  • destination_team_domain (str) –

    The domain of the destination team, i.e. the <destination_team_domain> part of the team url https://<destination_team_domain>.celonis.cloud/.

  • destination_data_pool_id (str) –

    The id of the destination data pool.

  • destination_data_source_id (Optional[str], default: None ) –

    (Optional) The id of the destination data connection. By default, the global data source is used.

  • **kwargs (Any, default: {} ) –

    Additional parameters set for JobCopyRequestTransport.

Returns:

  • JobTransport

    A read-only job transport object of the copied asset.

Examples:

```python
job_copy = job.copy_to(
    'celonis-team-domain',
    'zd6a18r1-171e-4f94-bdf1-c58b61251d37',
    'bd2e1854-851e-4a96-s0f5-d08z68423e48'
)
```
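
The returned object is a read-only JobTransport. A sketch of turning it into a full Job via from_transport; the destination_client below is an assumption, since API calls for the copied job must go through a client authenticated against the destination team:

```python
# destination_client is hypothetical: a Client for the destination team
# domain, which is not part of the copy_to example above.
copied_job = Job.from_transport(destination_client, job_copy)
```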

create_task

create_task(name, task_type, description=None, **kwargs)

Creates new task with name in given data job.

Parameters:

  • name (str) –

    Name of new task.

  • task_type (TaskType) –

    Type of new task.

  • description (Optional[str], default: None ) –

    Description of new task.

  • **kwargs (Any, default: {} ) –

    Additional parameters set for NewTaskInstanceTransport object.

Returns:

  • Task

    A Task object for newly created task.
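
A minimal sketch of creating a task, assuming a data_job created as in the module examples; the TaskType import path and enum member are assumptions and may differ between PyCelonis versions:

```python
# Assumption: TaskType is importable from the pycelonis.ems package and
# exposes a TRANSFORMATION member; verify against your installed version.
from pycelonis.ems import TaskType

task = data_job.create_task(
    name="PyCelonis Tutorial Task",
    task_type=TaskType.TRANSFORMATION,
    description="Created via create_task",
)
```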

create_transformation

create_transformation(name, description=None, **kwargs)

Creates new transformation task with name in given data job.

Parameters:

  • name (str) –

    Name of new transformation task.

  • description (Optional[str], default: None ) –

    Description of new transformation task.

  • **kwargs (Any, default: {} ) –

    Additional parameters set for NewTaskInstanceTransport object.

Returns:

  • Task

    A Task object for newly created transformation task.

Examples:

Create data job with transformation statement and execute it:

```python
data_job = data_pool.create_job("PyCelonis Tutorial Job")

task = data_job.create_transformation(
    name="PyCelonis Tutorial Task",
    description="This is an example task"
)

task.update_statement("""
    DROP TABLE IF EXISTS ACTIVITIES;
    CREATE TABLE ACTIVITIES (
        _CASE_KEY VARCHAR(100),
        ACTIVITY_EN VARCHAR(300)
    );
""")

data_job.execute()
```

create_extraction

create_extraction(name, **kwargs)

Creates new extraction task with name in given data job.

Parameters:

  • name (str) –

    Name of new extraction task.

  • **kwargs (Any, default: {} ) –

    Additional parameters set for NewTaskInstanceTransport object.

Returns:

  • Task

    A Task object for newly created extraction task.
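
A minimal usage sketch, mirroring the transformation example above:

```python
extraction = data_job.create_extraction(name="PyCelonis Tutorial Extraction")
```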

get_task

get_task(id_)

Gets task with given id.

Parameters:

  • id_ (str) –

    Id of task.

Returns:

  • Task

    A Task object for task with given id.

get_tasks

get_tasks()

Gets all tasks of given data job.

Returns:

  • List[Task]

    A list of Task objects for all tasks of given data job.
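
For example, a small sketch that lists all tasks of a data job; that Task exposes the id and name properties used when creating tasks is an assumption:

```python
for task in data_job.get_tasks():
    print(task.id, task.name)
```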
get_transformations

get_transformations()

Gets all transformations of given data job.

Returns:

  • List[Task]

    A list of Task objects for all transformations of given data job.

get_extractions

get_extractions()

Gets all extractions of given data job.

Returns:

  • List[Task]

    A list of Task objects for all extractions of given data job.

execute

execute(
    transformation_ids=None,
    extraction_configurations=None,
    data_model_execution_configurations=None,
    mode=ExtractionMode.DELTA,
    wait=True,
    **kwargs
)

Executes job with given transformations, extractions, and data model executions.

Parameters:

  • transformation_ids (Optional[List[str]], default: None ) –

    Ids of transformations to execute. Default is None, which executes all transformations.

  • extraction_configurations (Optional[List[ExtractionConfiguration]], default: None ) –

    Extraction configurations to define which extractions to execute. Default is None, which executes all extractions.

  • data_model_execution_configurations (Optional[List[DataModelExecutionConfiguration]], default: None ) –

    Data model execution configurations to define which data model reloads to execute. Default is None, which executes all data model reloads. If a configuration is given, it must contain all ids of the tables to reload.

  • mode (ExtractionMode, default: DELTA ) –

    Extraction mode. Default is DELTA.

  • wait (bool, default: True ) –

    If true, the function returns only after the data job has finished executing and raises an error if execution fails. Note that it waits only until the job itself has finished, not until all data models have been reloaded. If false, the function returns immediately after triggering the execution and does not raise an error if it fails.

  • **kwargs (Any, default: {} ) –

    Additional parameters set for JobExecutionConfiguration object.

Examples:

Create data job with transformation statement and execute it:

```python
data_job = data_pool.create_job("PyCelonis Tutorial Job")

task = data_job.create_transformation(
    name="PyCelonis Tutorial Task",
    description="This is an example task"
)

task.update_statement("""
    DROP TABLE IF EXISTS ACTIVITIES;
    CREATE TABLE ACTIVITIES (
        _CASE_KEY VARCHAR(100),
        ACTIVITY_EN VARCHAR(300)
    );
""")

data_job.execute()
```

cancel_execution

cancel_execution()

Cancels the execution of the job.
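
A minimal sketch combining a non-blocking execution with cancellation, using only the parameters documented above:

```python
data_job.execute(wait=False)  # trigger execution without blocking
data_job.cancel_execution()   # cancel the currently running execution
```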

create_data_model_execution

create_data_model_execution(
    data_model_id, table_ids=None, **kwargs
)

Creates data model execution in given data job.

Parameters:

  • data_model_id (str) –

    Id of data model to reload.

  • table_ids (Optional[List[str]], default: None ) –

    Ids of tables to reload. Defaults to full load.

  • **kwargs (Any, default: {} ) –

    Additional parameters set for DataModelExecutionTransport object.

Returns:

  • DataModelExecution

    A DataModelExecution object for newly created data model execution.
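
A minimal sketch; the data model id is a placeholder:

```python
data_model_execution = data_job.create_data_model_execution(
    data_model_id="<DATA_MODEL_ID>",
    table_ids=None,  # None defaults to a full load, per the parameter docs
)
```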

get_data_model_execution

get_data_model_execution(id_)

Gets data model execution with given id.

Parameters:

  • id_ (Optional[str]) –

    Id of data model execution.

Returns:

  • DataModelExecution

    A DataModelExecution object for data model execution with given id.

get_data_model_executions

get_data_model_executions()

Gets all data model executions of given data job.

Returns:

  • List[DataModelExecution]

    A list of DataModelExecution objects for all data model executions of given data job.
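
For example, a short sketch that lists the configured data model executions:

```python
for execution in data_job.get_data_model_executions():
    print(execution.id)
```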