data_pool
Module to interact with data pools.
This module contains the class to interact with a data pool in EMS Data Integration.
Typical usage example
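The usage example itself did not survive extraction; below is a minimal sketch of the typical flow, assuming the standard PyCelonis entry points (`get_celonis` and `data_integration.get_data_pool`) with placeholder URL, token, and ids. It requires a live EMS connection and is illustrative only.

```python
from pycelonis import get_celonis

# Connect to the EMS team (URL and token are placeholders).
celonis = get_celonis(
    base_url="https://<team>.celonis.cloud",
    api_token="<api-token>",
)

# Fetch a data pool, then work with its data models and tables.
data_pool = celonis.data_integration.get_data_pool("<data-pool-id>")
data_model = data_pool.get_data_model("<data-model-id>")
tables = data_pool.get_tables()
```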
DataPool ¶
Bases: DataPoolTransport
Data pool object to interact with data-pool-specific Data Integration endpoints.
from_transport
classmethod
¶
Creates a high-level data pool object from the given DataPoolTransport.
Parameters:
- `client` (`Client`) – Client to use to make API calls for the given data pool.
- `data_pool_transport` (`DataPoolTransport`) – DataPoolTransport object containing the properties of the data pool.
Returns:
- `DataPool` – A DataPool object with properties from the transport and the given client.
update ¶
Pushes local changes of the data pool to the EMS and updates its properties with the response from the EMS.
create_data_model ¶
Creates a new data model with the given name in the data pool.
Parameters:
- `name` (`str`) – Name of the new data model.
- `**kwargs` (`typing.Any`) – Additional parameters set for the DataModelTransport object.
Returns:
- `DataModel` – A DataModel object for the newly created data model.
get_data_model ¶
Gets the data model with the given id.
Parameters:
- `id_` (`str`) – Id of the data model.
Returns:
- `DataModel` – A DataModel object for the data model with the given id.
get_data_models ¶
Gets all data models of the given data pool.
Returns:
- `CelonisCollection[DataModel]` – A collection containing all data models.
create_data_push_job_from
staticmethod
¶
create_data_push_job_from(client, data_pool_id, target_name, type_=None, column_config=None, keys=None, **kwargs)
Creates a new data push job in the given data pool.
Parameters:
- `client` (`Client`) – Client to use to make API calls for the data push job.
- `data_pool_id` (`str`) – Id of the data pool where the data push job will be created.
- `target_name` (`str`) – Name of the table to which the job will push data.
- `type_` (`typing.Optional[JobType]`) – Type of the data push job.
- `column_config` (`typing.Optional[typing.List[ColumnTransport]]`) – Can be used to specify column types and string field lengths in number of characters.
- `keys` (`typing.Optional[typing.List[str]]`) – Primary keys to use in case of an upsert data push job.
- `**kwargs` (`typing.Any`) – Additional parameters set for the DataPushJob object.
Returns:
- `DataPushJob` – The newly created DataPushJob.
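To illustrate what a `column_config` conveys, here is a plain-Python sketch. The real parameter takes `ColumnTransport` objects as documented above; the dictionary keys used here (`columnName`, `columnType`, `fieldLength`) mirror the shape of the EMS payload and are assumptions for illustration only.

```python
def make_column(name, col_type, field_length=None):
    """Build one column-config entry; fieldLength only applies to string columns."""
    entry = {"columnName": name, "columnType": col_type}
    if field_length is not None:
        entry["fieldLength"] = field_length
    return entry

# One entry per target column: type, and for strings a maximum length in characters.
column_config = [
    make_column("ORDER_ID", "STRING", field_length=20),
    make_column("AMOUNT", "FLOAT"),
    make_column("CREATED_AT", "DATETIME"),
]
print(len(column_config))  # 3
```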
create_data_push_job ¶
Creates a new data push job in the given data pool.
Parameters:
- `target_name` (`str`) – Name of the table to which the job will push data.
- `type_` (`typing.Optional[JobType]`) – Type of the data push job.
- `column_config` (`typing.Optional[typing.List[ColumnTransport]]`) – Can be used to specify column types and string field lengths in number of characters.
- `keys` (`typing.Optional[typing.List[str]]`) – Primary keys to use in case of an upsert data push job.
- `**kwargs` (`typing.Any`) – Additional parameters set for the DataPushJob object.
Returns:
- `DataPushJob` – A DataPushJob object for the newly created data push job.
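The effect of passing `keys` for an upsert job can be shown with a conceptual sketch in plain Python (not the PyCelonis API): an incoming row whose primary key already exists in the target table overwrites that row, while a new key is inserted.

```python
keys = ["ID"]
# Existing target table, indexed by its primary-key tuple.
table = {("A",): {"ID": "A", "VALUE": 1}}
incoming = [{"ID": "A", "VALUE": 2}, {"ID": "B", "VALUE": 3}]

for row in incoming:
    # Insert or overwrite, depending on whether the key tuple already exists.
    table[tuple(row[k] for k in keys)] = row

print(sorted(r["ID"] for r in table.values()))  # ['A', 'B']
```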
get_data_push_job ¶
Gets the data push job with the given id.
Parameters:
- `id_` (`str`) – Id of the data push job.
Returns:
- `DataPushJob` – A DataPushJob object for the data push job with the given id.
get_data_push_jobs ¶
Gets all data push jobs of the given data pool.
Returns:
- `CelonisCollection[DataPushJob]` – A collection containing all data push jobs.
create_table ¶
create_table(df, table_name, drop_if_exists=False, column_config=None, chunk_size=100000, force=False, data_source_id=None, **kwargs)
Creates a new table in the given data pool.
Parameters:
- `df` (`pd.DataFrame`) – DataFrame to push to the new table.
- `table_name` (`str`) – Name of the new table.
- `drop_if_exists` (`bool`) – If true, drops the existing table if it exists. If false, raises a PyCelonisTableAlreadyExistsError if the table already exists.
- `column_config` (`typing.Optional[typing.List[ColumnTransport]]`) – Can be used to specify column types and string field lengths in number of characters.
- `chunk_size` (`int`) – Number of rows to push in one chunk.
- `force` (`bool`) – If true, the table can be replaced without a column config. Otherwise, an error is raised if the table would be replaced without a column config.
- `data_source_id` (`typing.Optional[str]`) – Data source id of the table.
- `**kwargs` (`typing.Any`) – Additional parameters set for the DataPushJob object.
Returns:
- `DataPoolTable` – The new table object.
Raises:
- `PyCelonisTableAlreadyExistsError` – Raised if drop_if_exists=False and the table already exists.
- `PyCelonisDataPushExecutionFailedError` – Raised when table creation fails.
- `PyCelonisValueError` – Raised when the table already exists and no column config is given.
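The `chunk_size` parameter splits the DataFrame into ceil(n_rows / chunk_size) pushes. A stdlib sketch of the resulting row ranges (illustrative only, not the PyCelonis internals):

```python
def chunk_bounds(n_rows, chunk_size=100_000):
    """Yield (start, stop) row ranges covering n_rows in steps of chunk_size."""
    for start in range(0, n_rows, chunk_size):
        yield start, min(start + chunk_size, n_rows)

# A 250,000-row frame is pushed in three chunks, the last one smaller.
bounds = list(chunk_bounds(250_000))
print(bounds)  # [(0, 100000), (100000, 200000), (200000, 250000)]
```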
get_table ¶
Gets the table located in the data pool with the given name and data source id.
Parameters:
- `name` (`str`) – Name of the table.
- `data_source_id` (`typing.Optional[str]`) – Data source id of the table.
Returns:
- `DataPoolTable` – The table object identified by name and data source id.
Raises:
- `PyCelonisNotFoundError` – Raised if no table with the given name and data source id exists in the data pool.
get_tables ¶
Gets all data pool tables of the given data pool.
Returns:
- `CelonisCollection[PoolTable]` – A collection containing all data pool tables.
get_data_connection ¶
Gets the data connection with the given id.
Parameters:
- `id_` (`str`) – Id of the data connection.
Returns:
- `DataConnection` – A DataConnection object for the data connection with the given id.
get_data_connections ¶
Gets all data connections of the given data pool.
Returns:
- `CelonisCollection[DataConnection]` – A collection containing all data connections.
create_job ¶
Creates a new job with the given name in the data pool.
Parameters:
- `name` (`str`) – Name of the new job.
- `data_source_id` (`typing.Optional[str]`) – Id of the data source to use for the job scope.
Returns:
- `Job` – A Job object for the newly created job.
get_job ¶
Gets the job with the given id.
Parameters:
- `id_` (`str`) – Id of the job.
Returns:
- `Job` – A Job object for the job with the given id.