Quickstart

In this tutorial you will learn how to use pycelonis_llm, a client library that allows you to seamlessly integrate LLMs offered by Celonis with your ML Workbench code.

With pycelonis_llm you can implement a wide variety of LLM use cases such as:
- Extracting information from free-text fields
- Summarizing documents
- Sentiment analysis
- Retrieval-augmented generation (RAG)
- Translation
- ...
You will learn in more detail how to use the following common LLM libraries with the LLMs provided by Celonis:
- OpenAI
- LiteLLM
- LangChain
- LlamaIndex
Prerequisites

- You need to install pycelonis_llm to execute the tutorial code.
- An LLM needs to be assigned to your workbench (this can be done in the creation/edit dialog of your ML Workbench).
Tutorial

First, you need to import pycelonis_llm, which will automatically patch the supported LLM libraries to be compatible with the LLMs offered by Celonis:
import pycelonis_llm
1. OpenAI

OpenAI provides the official Python client library for the OpenAI API, giving you a simple interface for chat completions and other model capabilities. On top of it, you can build use cases such as:
- Question-Answering (RAG)
- Agents
- Chatbots
- ...
For more information on OpenAI, take a look at their documentation.
You need the openai package alongside pycelonis_llm to use the OpenAI integration. To install it, run:
%pip install openai
Then, you can use OpenAI with the CelonisOpenAI client, which automatically uses the LLM assigned to your ML Workbench.
from pycelonis_llm.integrations.openai import CelonisOpenAI
client = CelonisOpenAI()
question = "Explain Celonis to a 5-year-old."
completion = client.chat.completions.create(
messages=[
{"role": "system", "content": "Your objective is to respond to questions in one sentence."},
{
"role": "user",
"content": question
}
],
)
print(f"Question:\t{question}\nLLM Response:\t{completion.choices[0].message.content}")
Question:	Explain Celonis to a 5-year-old.
LLM Response:	Celonis is like a magic helper that looks at how people do their work on a computer and finds ways to make it faster and better, just like cleaning up your room so you can find your toys more easily.
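Since CelonisOpenAI mirrors the stock openai client interface, streaming responses should work the same way. The sketch below assumes `stream=True` is supported by the Celonis-assigned model (an assumption, not verified here); only the small delta-joining helper is plain, dependency-free Python:

```python
def join_stream(deltas):
    """Concatenate the text deltas of a streamed chat completion,
    skipping the None deltas that terminate the stream."""
    return "".join(d for d in deltas if d is not None)

# Sketch of streaming inside a workbench (assumes CelonisOpenAI accepts
# stream=True like the stock openai client):
# stream = client.chat.completions.create(
#     messages=[{"role": "user", "content": question}],
#     stream=True,
# )
# print(join_stream(chunk.choices[0].delta.content for chunk in stream))

print(join_stream(["Hel", "lo", None, "!"]))  # stand-in for streamed deltas
```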
2. LiteLLM

LiteLLM is a Python library that simplifies the process of integrating various LLM APIs. With pycelonis_llm you can use the LiteLLM interface to generate basic chat completions.

To use the LLM assigned to your ML Workbench, simply call the completion function and pass the messages parameter to get a response from the model (pycelonis_llm automatically sets the model parameter to the LLM assigned to your ML Workbench).
Note: It is not possible to change the LLM offered by Celonis through the LiteLLM model parameter. Changing the LLM always needs to be done through the ML Workbench creation/update dialog.
For more information on LiteLLM, take a look at their documentation.
from litellm import completion
question = "Are LLMs going to replace humans soon?"
messages = [
{"role": "system", "content": "Your objective is to respond to questions in one sentence."},
{"role": "user", "content": question},
]
response = completion(messages=messages, max_tokens=100)
print(f"Question:\t{question}\nLLM Response:\t{response.choices[0].message.content}")
Question:	Are LLMs going to replace humans soon?
LLM Response:	No, LLMs are tools designed to assist humans but are not capable of fully replacing human roles or responsibilities.
By default, LiteLLM debug logs are suppressed by pycelonis_llm. You can enable them using:
import litellm
litellm.suppress_debug_info = False
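Because completion takes a plain list of role/content dictionaries, you can prepend few-shot examples before the actual question, which often helps with tasks like sentiment analysis. The helper below is plain Python (build_messages is an illustrative helper, not part of pycelonis_llm); the final completion call is sketched in a comment:

```python
def build_messages(system, examples, question):
    """Build an OpenAI-style message list: a system prompt, alternating
    user/assistant few-shot examples, then the actual user question."""
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": question})
    return messages

messages = build_messages(
    system="Classify the sentiment of the text as positive or negative.",
    examples=[("The invoice was paid on time.", "positive")],
    question="The delivery was delayed again.",
)

# response = completion(messages=messages, max_tokens=5)  # as in the example above
```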
3. LangChain

LangChain is a framework to build use cases with LLMs by chaining interoperable components.
It has a modular design with a focus on constructing and orchestrating sequences of operations by leveraging its chains, prompts, models, memory, and agents. Some example use cases are retrieving information from documents and interacting with databases to write, analyze, and optimize queries. Furthermore, it supports:
- Agents
- Tool calling
- Chatbots
- ...
For more information on LangChain, take a look at their documentation.
You need the langchain-community package alongside pycelonis_llm to use the LangChain integration. To install it, run:
%pip install langchain-community
Then, you can use LangChain with the CelonisLangChain model, which automatically uses the LLM assigned to your ML Workbench.
from pycelonis_llm.integrations.langchain import CelonisLangChain
from langchain_core.messages import HumanMessage, SystemMessage
chat = CelonisLangChain()
question = "What is the answer to the ultimate question of life, the universe and everything?"
messages = [
SystemMessage(content="Your objective is to respond to questions in one sentence."),
HumanMessage(content=question),
]
response = chat.invoke(messages)
print(f"Question:\t{question}\nLLM Response:\t{response.content}")
Question:	What is the answer to the ultimate question of life, the universe and everything?
LLM Response:	According to Douglas Adams' "The Hitchhiker's Guide to the Galaxy," the answer is 42.
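Since CelonisLangChain behaves like a standard LangChain chat model, it should also compose into chains with LangChain's pipe syntax. The sketch below assumes ChatPromptTemplate from langchain-core works with it as with any chat model (an assumption); the dict-to-tuple helper is plain Python:

```python
def to_template_messages(messages):
    """Convert OpenAI-style role/content dicts into the (role, content)
    tuples accepted by ChatPromptTemplate.from_messages."""
    return [(m["role"], m["content"]) for m in messages]

pairs = to_template_messages([
    {"role": "system", "content": "Your objective is to respond to questions in one sentence."},
    {"role": "user", "content": "{question}"},
])

# Sketch of a chain (assumes CelonisLangChain composes like any LangChain
# chat model; ChatPromptTemplate is from langchain-core):
# from langchain_core.prompts import ChatPromptTemplate
# chain = ChatPromptTemplate.from_messages(pairs) | CelonisLangChain()
# print(chain.invoke({"question": "What is process mining?"}).content)
```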
4. LlamaIndex

LlamaIndex is an open-source framework to support LLM application development.
Compared to LangChain, it is more focused on retrieval-augmented generation use cases.
It allows you to ingest data which can then be queried using natural language.
Furthermore, it supports:
- Question-Answering (RAG)
- Chatbots
- Agents
- ...
For more information on LlamaIndex, take a look at their documentation.
You need the llama-index and llama-index-llms-litellm packages alongside pycelonis_llm to use the LlamaIndex integration. To install them, run:
%pip install llama-index llama-index-llms-litellm
Then, you can use LlamaIndex with the CelonisLlamaIndex model, which automatically uses the LLM assigned to your ML Workbench.
from pycelonis_llm.integrations.llama_index import CelonisLlamaIndex
from llama_index.core.llms import ChatMessage
llm = CelonisLlamaIndex()
question = "Explain Celonis to a 5-year-old."
messages = [
ChatMessage(role="system", content="Your objective is to respond to questions in one sentence."),
ChatMessage(role="user", content=question),
]
response = llm.chat(messages)
print(f"Question:\t{question}\nLLM Response:\t{response.message.content}")
Question:	Explain Celonis to a 5-year-old.
LLM Response:	Celonis is like a magic helper for businesses that looks at how they do things and finds ways to make them faster and better.
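For the RAG use cases LlamaIndex targets, documents are typically split into overlapping chunks before indexing. The chunking helper below is an illustrative, dependency-free sketch (LlamaIndex ships its own splitters); the indexing part is only sketched in comments, since it assumes an embedding model is available in your workbench, which CelonisLlamaIndex alone does not provide:

```python
def chunk_text(text, size=200, overlap=50):
    """Naively split text into overlapping character chunks for indexing."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

text = "".join(str(i % 10) for i in range(450))  # stand-in for a real document
chunks = chunk_text(text)
print(len(chunks))  # 3 overlapping chunks

# Sketch of a minimal RAG pipeline (assumes an embedding model is configured;
# names below are from llama-index core):
# from llama_index.core import Document, Settings, VectorStoreIndex
# Settings.llm = CelonisLlamaIndex()
# index = VectorStoreIndex.from_documents([Document(text=c) for c in chunks])
# print(index.as_query_engine().query("What does the document describe?"))
```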
Conclusion

Congratulations! You have learned how to use common LLM frameworks such as LangChain and LlamaIndex with the LLMs offered by Celonis.

Next, we recommend the other PyCelonis tutorials, for example Data Integration - Introduction, which shows how to access your data model from the ML Workbench and query your data; combined with pycelonis_llm, this can be used to build a RAG-based application.