Data Jobs¶
In this tutorial, you will learn how to interact with data jobs and tasks via PyCelonis. More specifically, you will learn:
- What data jobs and tasks are used for in the EMS
- How to create new data jobs and tasks
- How to create and retrieve statements from tasks
Prerequisites¶
To follow this tutorial, you should have already created a data pool inside the EMS and should have uploaded data into it. If you haven't done this yet, please complete at a minimum the Data Integration - Introduction and the Data Push tutorials first. However, it is recommended to also complete the other Data Integration tutorials, as this tutorial will build upon the tables created in the previous tutorials.
Tutorial¶
Data jobs are used to define ETL pipelines for data pools. They consist of one or multiple tasks that specify how data is extracted from source systems and how it is transformed to yield the final data pool tables, which can be loaded into a data model.
Note:
PyCelonis currently does not support extraction tasks. Hence, we will focus mainly on transformation tasks in this tutorial.
Note:
Transformation tasks operate on data pool tables; they cannot be used to modify data model tables. Data model tables are read-only references to the corresponding data pool tables and can only be used to retrieve data. If we want to modify a table, we therefore have to change the underlying data pool table. When the data model is subsequently reloaded, the changes from the data pool tables are ingested into the data model tables.
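As a rough sketch of what this looks like in code (using the data_pool object that we will retrieve in step 2, and assuming a data model named 'PyCelonis Tutorial Data Model' was created in the previous tutorials; adjust the name to your setup):
# Sketch only: reload a data model so that changes made to the underlying
# data pool tables are ingested into the data model tables.
# The data model name is an assumption from the previous tutorials.
data_model = data_pool.get_data_models().find("PyCelonis Tutorial Data Model")
data_model.reload()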
1. Import PyCelonis and connect to Celonis API¶
from pycelonis import get_celonis
celonis = get_celonis(permissions=False)
[2023-06-26 13:10:20,165] INFO: No `base_url` given. Using environment variable 'CELONIS_URL'
[2023-06-26 13:10:20,169] INFO: No `api_token` given. Using environment variable 'CELONIS_API_TOKEN'
[2023-06-26 13:10:20,254] WARNING: KeyType is not set. Defaulted to 'APP_KEY'.
[2023-06-26 13:10:20,256] INFO: Initial connect successful! PyCelonis Version: 2.3.0
2. Find corresponding data pool¶
Since data jobs and tasks are used to modify tables inside a data pool, we first have to navigate to the data pool in which we want to create them. Here, we will use the data pool created in the previous tutorials:
data_pool = celonis.data_integration.get_data_pools().find("PyCelonis Tutorial Data Pool")
data_pool
DataPool(id='6c178afe-21e2-4f77-b862-e37653ae0b2e', name='PyCelonis Tutorial Data Pool')
3. Create data jobs and tasks¶
The first step in working with data jobs and tasks is to have an empty data job in place, which will serve as a place to store and organize different extraction and transformation tasks. A new data job can be created with the create_job() command:
data_job = data_pool.create_job("PyCelonis Tutorial Data Job")
data_job
Since we have already created an empty data job in the Data Integration - Introduction tutorial, we can simply access this data job by calling the get_jobs().find() method:
data_job = data_pool.get_jobs().find("PyCelonis Tutorial Data Job")
data_job
Job(id='cde6d25f-337a-4320-8565-acdbba93a99b', name='PyCelonis Tutorial Data Job', data_pool_id='6c178afe-21e2-4f77-b862-e37653ae0b2e')
After having an empty data job in place, we can start adding tasks to it. PyCelonis currently only supports creating transformation tasks. This can be done by calling the create_transformation() method, which takes as input arguments:
Name | Type | Description | Default |
---|---|---|---|
name | str | Name of the transformation task | Required |
description | str | Custom description of what the transformation task does | None |
task = data_job.create_transformation(name="PyCelonis Tutorial Task",
description="This is an example task created for the PyCelonis tutorial")
task
[2023-06-26 13:10:20,323] INFO: Successfully created task of type 'TRANSFORMATION' with id '54ecc8b9-810c-4bf5-8385-6e0c8d01c35f'
Transformation(id='54ecc8b9-810c-4bf5-8385-6e0c8d01c35f', pool_id='6c178afe-21e2-4f77-b862-e37653ae0b2e', task_id='dbcf2311-2aa5-45f2-8cd7-f68c5c6c0570', task_type=<TaskType.TRANSFORMATION: 'TRANSFORMATION'>, name='PyCelonis Tutorial Task', job_id='cde6d25f-337a-4320-8565-acdbba93a99b', disabled=False)
To verify that the transformation task exists in the data job, we can call the get_tasks() method:
data_job.get_tasks()
[ Transformation(id='54ecc8b9-810c-4bf5-8385-6e0c8d01c35f', pool_id='6c178afe-21e2-4f77-b862-e37653ae0b2e', task_id='dbcf2311-2aa5-45f2-8cd7-f68c5c6c0570', task_type=<TaskType.TRANSFORMATION: 'TRANSFORMATION'>, name='PyCelonis Tutorial Task', job_id='cde6d25f-337a-4320-8565-acdbba93a99b', disabled=False) ]
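If we later need to retrieve a single task by name, the same find() pattern used for data pools and data jobs should also work on the task collection (a minimal sketch, assuming the task name created above):
# Sketch: look up a specific task in the data job by its name.
task = data_job.get_tasks().find("PyCelonis Tutorial Task")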
4. Create and retrieve statements from tasks¶
The newly created transformation task is currently empty. Hence, we first need to specify a statement that defines how the data should be transformed. Transformation statements in Celonis are written in Vertica SQL. The statement of a specific task can be created and updated by calling the update_statement() method and passing the Vertica SQL query as input argument.
Here, we will create a simple Vertica SQL statement that creates a new activity table ACTIVITIES_6 and inserts, for each case (i.e. purchase order item) with purchasing document category 'F', an activity 'Create Purchase Requisition Item'.
task.update_statement("""
DROP TABLE IF EXISTS ACTIVITIES_6;
CREATE TABLE ACTIVITIES_6 (
_CASE_KEY VARCHAR(100),
ACTIVITY_EN VARCHAR(300)
);
INSERT INTO ACTIVITIES_6(
_CASE_KEY,
ACTIVITY_EN
)
SELECT
EKPO.MANDT || EKPO.EBELN || EKPO.EBELP AS _CASE_KEY,
'Create Purchase Requisition Item' AS ACTIVITY_EN
FROM EKPO
JOIN EKKO ON 1=1
AND EKPO.MANDT = EKKO.MANDT
AND EKKO.EBELN = EKPO.EBELN
AND EKKO.BSTYP = 'F'
;
""")
[2023-06-26 13:10:20,364] INFO: Successfully updated statement of task with id '54ecc8b9-810c-4bf5-8385-6e0c8d01c35f'
However, it is also possible to create far more complex transformation tasks in Vertica SQL. Some of the possible Vertica statements include CREATE VIEW, CREATE TABLE, SELECT FROM, DROP TABLE, INSERT INTO, JOIN, GROUP BY, CASE WHEN, ROW_NUMBER() OVER, COALESCE, etc.
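For illustration only, a more involved statement could combine several of these constructs, for example deriving the activity name with CASE WHEN and deduplicating rows with ROW_NUMBER() OVER. The target table below is hypothetical, and running this would overwrite the statement defined above, so treat it purely as a sketch:
# Hypothetical sketch only -- not executed in this tutorial, since
# update_statement() would replace the statement defined above.
# The target table ACTIVITIES_EXAMPLE is illustrative.
task.update_statement("""
DROP TABLE IF EXISTS ACTIVITIES_EXAMPLE;
CREATE TABLE ACTIVITIES_EXAMPLE AS
SELECT
    _CASE_KEY,
    CASE WHEN BSTYP = 'F' THEN 'Create Purchase Order Item'
         ELSE 'Create Document Item'
    END AS ACTIVITY_EN
FROM (
    SELECT
        EKPO.MANDT || EKPO.EBELN || EKPO.EBELP AS _CASE_KEY,
        EKKO.BSTYP AS BSTYP,
        ROW_NUMBER() OVER (
            PARTITION BY EKPO.MANDT, EKPO.EBELN, EKPO.EBELP
            ORDER BY EKPO.EBELP
        ) AS RN
    FROM EKPO
    JOIN EKKO
        ON EKPO.MANDT = EKKO.MANDT
       AND EKPO.EBELN = EKKO.EBELN
) sub
WHERE RN = 1;
""")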
We can verify that the task includes a Vertica SQL statement by calling the get_statement() method:
task.get_statement()
"\nDROP TABLE IF EXISTS ACTIVITIES_6;\nCREATE TABLE ACTIVITIES_6 (\n _CASE_KEY VARCHAR(100),\n ACTIVITY_EN VARCHAR(300)\n);\nINSERT INTO ACTIVITIES_6(\n _CASE_KEY,\n ACTIVITY_EN\n)\nSELECT\nEKPO.MANDT || EKPO.EBELN || EKPO.EBELP AS _CASE_KEY,\n'Create Purchase Requisition Item' AS ACTIVITY_EN\nFROM EKPO\nJOIN EKKO ON 1=1\n AND EKPO.MANDT = EKKO.MANDT\n AND EKKO.EBELN = EKPO.EBELN\n AND EKKO.BSTYP = 'F'\n;\n"
By default, the newly created task is enabled. This means that when the data job is triggered in the next step, the transformation will be executed. We can also disable the task to exclude the transformation from the execution.
task.disable()
[2023-06-26 13:10:20,387] INFO: Successfully disabled task with id '54ecc8b9-810c-4bf5-8385-6e0c8d01c35f'
If we want the transformation to be executed again, we can enable it.
task.enable()
[2023-06-26 13:10:20,398] INFO: Successfully enabled task with id '54ecc8b9-810c-4bf5-8385-6e0c8d01c35f'
5. Execute the data job¶
To execute the data job, simply call the execute() method, which by default executes all transformations, extractions, and data model reloads:
data_job.execute()
[2023-06-26 13:10:20,414] INFO: Successfully started execution for job with id 'cde6d25f-337a-4320-8565-acdbba93a99b'
[2023-06-26 13:10:20,415] INFO: Wait for execution of job with id 'cde6d25f-337a-4320-8565-acdbba93a99b'
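Once the execution has finished, the new ACTIVITIES_6 table should be available in the data pool. As a minimal sketch (assuming the get_tables() lookup behaves as in the Data Push tutorial), we could verify this as follows:
# Sketch: check that the transformed table now exists in the data pool.
activities_table = data_pool.get_tables().find("ACTIVITIES_6")
activities_table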
Conclusion¶
Congratulations! You have learned how to create and retrieve data jobs and transformation tasks via PyCelonis and how to specify transformation statements in Vertica SQL. You have now reached the end of the Data Integration tutorial and should be equipped with enough knowledge to get data from and upload data into the EMS with ease. In the next tutorial, Studio Introduction, you will learn how to perform basic interactions with Celonis Studio.