Kubernetes#
Kubernetes is a popular container-orchestration system for automating application deployment, scaling, and management. This page shows you how to set up a Python environment and execute PyFlink jobs in a Kubernetes cluster.
Build PyFlink Image#
Python 3.6 or above, with PyFlink pre-installed, needs to be available in the docker container. It's suggested to use a Python virtual environment to set up the Python environment. See Create a Python virtual environment for more details on how to prepare a virtual environment with PyFlink installed.
To build a custom image which has Python and PyFlink prepared, you can refer to the following Dockerfile:
FROM flink:1.15.2
# install python3: Debian 11 has updated Python to 3.9, so install Python 3.7 from source,
# as PyFlink currently only officially supports Python 3.6, 3.7 and 3.8.
RUN apt-get update -y && \
apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev libffi-dev && \
wget https://www.python.org/ftp/python/3.7.9/Python-3.7.9.tgz && \
tar -xvf Python-3.7.9.tgz && \
cd Python-3.7.9 && \
./configure --without-tests --enable-shared && \
make -j6 && \
make install && \
ldconfig /usr/local/lib && \
cd .. && rm -f Python-3.7.9.tgz && rm -rf Python-3.7.9 && \
ln -s /usr/local/bin/python3 /usr/local/bin/python && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install PyFlink
RUN pip3 install apache-flink==1.15.2
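Once the Dockerfile is ready, build the image and push it to a registry that is reachable from the Kubernetes cluster. A minimal sketch; the registry and image name are placeholders you should replace:

```shell
# build the custom PyFlink image from the Dockerfile above
docker build -t <registry>/pyflink:1.15.2 .
# push it so the Kubernetes nodes can pull it
docker push <registry>/pyflink:1.15.2
```

The resulting image name is what you pass via `-Dkubernetes.container.image` below.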
Execute PyFlink jobs in application mode with Native Kubernetes#
Application mode is useful when a Kubernetes cluster is already available and you want to execute each Flink job in a separate Flink cluster. Flink is responsible for talking with Kubernetes, allocating and de-allocating TaskManagers depending on the required resources.

You can execute PyFlink jobs in application mode as follows:
./bin/flink run-application \
--target kubernetes-application \
--parallelism 8 \
-Dkubernetes.cluster-id=<ClusterId> \
-Dtaskmanager.memory.process.size=4096m \
-Dkubernetes.taskmanager.cpu=2 \
-Dtaskmanager.numberOfTaskSlots=4 \
-Dkubernetes.container.image=<PyFlinkImageName> \
--pyModule word_count \
--pyFiles /opt/flink/examples/python/table/word_count.py
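After submission, you can inspect the application cluster and its jobs. A sketch, assuming the same `<ClusterId>` as above:

```shell
# list the pods started for the application cluster
# (Flink labels them with app=<ClusterId>)
kubectl get pods -l app=<ClusterId>
# list the jobs running on the cluster
./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=<ClusterId>
# cancel a running job, which also shuts down the application cluster
./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=<ClusterId> <JobId>
```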
Execute PyFlink jobs in session mode with Native Kubernetes#
You can also start a Flink session cluster on Kubernetes and then submit PyFlink jobs to the session cluster.
The session cluster can be started as follows:
./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
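Once the session cluster is up, you can verify it with kubectl. A sketch; the resource names below follow the native Kubernetes naming scheme, where the Deployment is named after the cluster id and the REST service is named `<cluster-id>-rest`:

```shell
# the session cluster runs as a Deployment named after the cluster id
kubectl get deployment my-first-flink-cluster
# the JobManager REST endpoint is exposed via a dedicated service
kubectl get svc my-first-flink-cluster-rest
```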
Then you can submit PyFlink jobs to the session cluster as follows:
./bin/flink run \
--target kubernetes-session \
-Dkubernetes.cluster-id=my-first-flink-cluster \
-pyarch /path/to/venv.zip \
-pyexec venv.zip/venv/bin/python3 \
-py word_count.py
Note
The option -pyclientexec can be used to specify a local Python executable, since the job is compiled on the client side. If it's not specified, the Python environment of the current shell is used.
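The venv.zip passed via -pyarch can be prepared on any machine with a compatible Python version. A minimal sketch, assuming PyFlink 1.15.2; note that a plain virtualenv is not always relocatable across machines, so see the Create a Python virtual environment guide for a portable setup:

```shell
# create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate
# install PyFlink into it (version is an assumption; match your Flink version)
pip install apache-flink==1.15.2
deactivate
# package the environment so it can be shipped with -pyarch
zip -r venv.zip venv
```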
See Session Mode for more details about session mode of Kubernetes.
Execute PyFlink jobs with Flink Kubernetes Operator#
See PyFlink Example for more details on how to execute PyFlink jobs with Flink Kubernetes Operator.