Quickstart¶
In this tutorial you will learn how to use pycelonis_llm, a client library that allows you to seamlessly integrate the LLMs offered by Celonis with your ML Workbench code.
With pycelonis_llm you can implement a wide variety of LLM use cases, such as:
- Extracting information from free-text fields
- Summarizing documents
- Sentiment analysis
- Retrieval-augmented generation (RAG)
- Translation
- ...
You will learn in more detail how to use the following common LLM libraries with the LLMs provided by Celonis:
- LiteLLM
- LangChain
- LlamaIndex
Prerequisites¶
- You need to install pycelonis_llm to execute the tutorial code.
- An LLM needs to be assigned to your workbench (this can be done in the creation/edit dialog of your ML Workbench).
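If pycelonis_llm is not already available in your environment, you can typically install it with pip; this is a minimal sketch, assuming the package is reachable from your workbench's package index:
%pip install pycelonis_llm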
Tutorial¶
First, you need to import pycelonis_llm, which will automatically patch the supported LLM libraries to be compatible with the LLMs offered by Celonis:
import pycelonis_llm
1. LiteLLM¶
LiteLLM is a Python library that simplifies the process of integrating various LLM APIs. With pycelonis_llm you can use the LiteLLM interface to generate basic chat completions.
To use the LLM assigned to your ML Workbench, simply call the completion function and pass the messages parameter to get a response from the model (pycelonis_llm automatically sets the model to the LLM assigned to your ML Workbench).
Note: It is not possible to change the LLM offered by Celonis through the LiteLLM model parameter. Changing the LLM always needs to be done through the ML Workbench creation/update dialog.
For more information on LiteLLM, take a look at their documentation.
from litellm import completion

question = "Are LLMs going to replace humans soon?"
messages = [
    {"role": "system", "content": "Your objective is to respond to questions in one sentence."},
    {"role": "user", "content": question},
]

# No model parameter needed: pycelonis_llm sets it to the LLM assigned to your ML Workbench.
response = completion(messages=messages, max_tokens=100)
print(f"Question:\t{question}\nLLM Response:\t{response.choices[0].message.content}")
Question:	Are LLMs going to replace humans soon?
LLM Response:	No, LLMs are tools designed to assist humans, not replace them.
By default, LiteLLM debug logs are suppressed by pycelonis_llm. You can enable them using:
import litellm
litellm.suppress_debug_info = False
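Since pycelonis_llm reuses the LiteLLM interface, LiteLLM's standard streaming mode should also work. This is a minimal sketch, assuming the patched completion function forwards LiteLLM's stream parameter; it prints the response token by token:
from litellm import completion

messages = [{"role": "user", "content": "Name one benefit of process mining."}]

# stream=True yields OpenAI-style chunks instead of a single response object.
for chunk in completion(messages=messages, stream=True, max_tokens=100):
    content = chunk.choices[0].delta.content
    if content:  # the final chunk's delta may be empty
        print(content, end="")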
2. LangChain¶
LangChain is a framework to build use cases with LLMs by chaining interoperable components.
It has a modular design with a focus on constructing and orchestrating sequences of operations, leveraging its chains, prompts, models, memory, and agents. Example use cases include retrieving information from documents and interacting with databases to write, analyze, and optimize queries. Furthermore, it supports:
- Agents
- Tool calling
- Chatbots
- ...
For more information on LangChain, take a look at their documentation.
You need the langchain-community package alongside pycelonis_llm to use the LangChain integration. To install it, run:
%pip install langchain-community
Then, you can use LangChain with the CelonisLangChain model, which automatically uses the LLM assigned to your ML Workbench.
from pycelonis_llm.integrations.langchain import CelonisLangChain
from langchain_core.messages import HumanMessage, SystemMessage

# CelonisLangChain automatically uses the LLM assigned to your ML Workbench.
chat = CelonisLangChain()

question = "What is the answer to the ultimate question of life, the universe and everything?"
messages = [
    SystemMessage(content="Your objective is to respond to questions in one sentence."),
    HumanMessage(content=question),
]
response = chat.invoke(messages)
print(f"Question:\t{question}\nLLM Response:\t{response.content}")
Question:	What is the answer to the ultimate question of life, the universe and everything?
LLM Response:	The answer to the ultimate question of life, the universe, and everything is 42.
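Because CelonisLangChain behaves like any other LangChain chat model, you can also compose it with further LangChain components, for example a prompt template, using LangChain's pipe (LCEL) syntax. A minimal sketch (the question text here is purely illustrative):
from langchain_core.prompts import ChatPromptTemplate
from pycelonis_llm.integrations.langchain import CelonisLangChain

# Compose a reusable chain: prompt template -> Celonis chat model.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Your objective is to respond to questions in one sentence."),
    ("human", "{question}"),
])
chain = prompt | CelonisLangChain()

response = chain.invoke({"question": "What is process mining?"})
print(response.content)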
3. LlamaIndex¶
LlamaIndex is an open-source framework to support LLM application development.
Compared to LangChain, it is more focused on retrieval-augmented generation (RAG) use cases.
It allows you to ingest data, which can then be queried using natural language.
Furthermore, it supports:
- Question-Answering (RAG)
- Chatbots
- Agents
- ...
For more information on LlamaIndex, take a look at their documentation.
You need the llama-index and llama-index-llms-litellm packages alongside pycelonis_llm to use the LlamaIndex integration. To install them, run:
%pip install llama-index llama-index-llms-litellm
Then, you can use LlamaIndex with the CelonisLlamaIndex model, which automatically uses the LLM assigned to your ML Workbench.
from pycelonis_llm.integrations.llama_index import CelonisLlamaIndex
from llama_index.core.llms import ChatMessage

# CelonisLlamaIndex automatically uses the LLM assigned to your ML Workbench.
llm = CelonisLlamaIndex()

question = "Explain Celonis to a 5-year-old."
messages = [
    ChatMessage(role="system", content="Your objective is to respond to questions in one sentence."),
    ChatMessage(role="user", content=question),
]
response = llm.chat(messages)
print(f"Question:\t{question}\nLLM Response:\t{response.message.content}")
Question:	Explain Celonis to a 5-year-old.
LLM Response:	Celonis is like a super-smart detective for businesses, helping them find and fix problems to work better and faster.
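For RAG-style use cases, LlamaIndex components such as query engines pick up their LLM from the global Settings object. A minimal sketch that registers CelonisLlamaIndex as the default (note this is an assumption about how you might wire it up; a full RAG pipeline additionally requires an embedding model, which is not covered in this tutorial):
from llama_index.core import Settings
from pycelonis_llm.integrations.llama_index import CelonisLlamaIndex

# Register the Celonis LLM as the global default so that query engines,
# chat engines, etc. use it without passing llm= explicitly.
Settings.llm = CelonisLlamaIndex()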
Conclusion¶
Congratulations! You have learned how to use common LLM frameworks such as LiteLLM, LangChain, and LlamaIndex with the LLMs offered by Celonis.
Next, we recommend the other PyCelonis tutorials. For example, Data Integration - Introduction teaches you how to access your data model from the ML Workbench and query your data, which, combined with pycelonis_llm, can be used to build a RAG-based application.