diff --git a/examples/rbac-remote/README.md b/examples/rbac-remote/README.md
new file mode 100644
index 0000000000..118800db55
--- /dev/null
+++ b/examples/rbac-remote/README.md
@@ -0,0 +1,171 @@
+# Feast Deployment with RBAC
+
+## Demo Summary
+This demo showcases how to enable Role-Based Access Control (RBAC) for Feast using the Kubernetes or [OIDC](https://openid.net/developers/how-connect-works/) authentication type.
+It deploys the Feast server components (registry, offline, online) and client examples in a Kubernetes environment,
+with the goal of enforcing access control based on user roles and permissions. To understand the Feast RBAC framework,
+please read these reference documents:
+- [RBAC Architecture](https://docs.feast.dev/v/master/getting-started/architecture/rbac)
+- [RBAC Permission](https://docs.feast.dev/v/master/getting-started/concepts/permission)
+- [RBAC Authorization Manager](https://docs.feast.dev/v/master/getting-started/components/authz_manager)
+
+## Tools and Projects
+- Kubernetes
+- Feast
+- PostgreSQL Database
+- [Keycloak](https://www.keycloak.org) (if using OIDC)
+
+## Application Environment
+
+This demo contains the following components:
+
+1. Feast remote server components (online, offline, registry).
+2. Feast remote client RBAC examples.
+3. YAML configuration and installation-related script files.
+
+![demo.jpg](demo.jpg)
+
+## Setup Instructions
+
+The application works with Kubernetes or OpenShift, and the instructions assume that you are using a Kubernetes or OpenShift cluster.
+
+### Prerequisites
+
+1. A Kubernetes cluster and the Kubernetes CLI (kubectl).
+2. Helm, for deploying the Feast components.
+3. A Python environment.
+4. The latest version of the Feast CLI.
+
+## 1. Prerequisites Step
+
+  - **Step 1: Create the Feast project with PostgreSQL.**
+
+    * Install PostgreSQL on the Kubernetes cluster. If you are using OpenShift, you can install it with this [OpenShift Template](https://github.com/RHEcosystemAppEng/feast-workshop-team-share/tree/main/feast_postgres#1-install-postgresql-on-openshift-using-openshift-template).
+    * Port forward the PostgreSQL database to your local machine. Since we set up the Feast project locally using the Feast CLI, PostgreSQL must be reachable on localhost:
+      ```sh
+      kubectl port-forward svc/postgresql 5432:5432
+      ```
+    * Create a feature repository/project with the Feast CLI using PostgreSQL. See the detailed instructions [here](https://docs.feast.dev/reference/offline-stores/postgres#getting-started).
+      For this (local) example setup, we create a project named `server` using these settings for the [feature_store.yaml](server/feature_repo/feature_store.yaml).
+
+## 2. Authorization Setup
+
+### A. Kubernetes Authorization
+- **Step 1: Create the Remote Configuration Files**
+  - Set the auth type to `kubernetes` in the respective `feature_store` files:
+
+    ```yaml
+    auth:
+      type: kubernetes
+    ```
+  - For each server, a feature store YAML file can be created, for example, like below:
+
+    **Registry Server:** [feature_store_registry.yaml](server/k8s/feature_store_registry.yaml)
+
+    **Offline Server:** [feature_store_offline.yaml](server/k8s/feature_store_offline.yaml)
+
+    **Online Server:** [feature_store_online.yaml](server/k8s/feature_store_online.yaml)
+
+- **Step 2: Deploy the Server Components**
+  - Run the installation script. When prompted, enter `k8s` for the Kubernetes authentication deployment and confirm the deployment of the server components. The script deploys all of the components into the `feast-dev` namespace.
+
+    ```sh
+    ./install_feast.sh
+    ```
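+  - For reference, the script passes each server's `feature_store` file to the Helm chart as a base64-encoded string (the `feature_store_yaml_base64` value). A minimal Python sketch of that encoding step, shown here for the registry configuration:
+
+    ```python
+    import base64
+    from pathlib import Path
+
+    # Equivalent of `base64 < server/k8s/feature_store_registry.yaml` in install_feast.sh;
+    # the result is passed to Helm via --set feature_store_yaml_base64=<value>.
+    config_path = Path("server/k8s/feature_store_registry.yaml")
+    encoded = base64.b64encode(config_path.read_bytes()).decode("ascii")
+    print(encoded[:60] + "...")
+    ```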
+### B. OIDC Authorization
+- **Step 1: Set Up Keycloak**
+  - See the documentation [here](https://www.keycloak.org/getting-started/getting-started-kube) and install Keycloak.
+  - Create a new realm named `feast-rbac` from the admin console.
+  - Under the `feast-rbac` realm, create a new client named `feast-client`.
+  - Generate the secret for `feast-client`.
+- **Step 2: Create the Server Feature Store Files**
+  - Set the auth type to `oidc` in the respective `feature_store` files:
+
+    ```yaml
+    auth:
+      type: oidc
+      client_id: _CLIENT_ID_
+      auth_discovery_url: _OIDC_SERVER_URL_/realms/feast-rbac/.well-known/openid-configuration
+    ```
+  - For each server, a feature store YAML file can be created, for example, like below:
+
+    **Registry Server:** [feature_store_registry.yaml](server/oidc/feature_store_registry.yaml)
+
+    **Offline Server:** [feature_store_offline.yaml](server/oidc/feature_store_offline.yaml)
+
+    **Online Server:** [feature_store_online.yaml](server/oidc/feature_store_online.yaml)
+
+- **Step 3: Deploy the Server Components**
+  - Run the installation script. Enter `oidc` for the Keycloak authentication deployment. The script deploys all of the components into the `feast-dev` namespace.
+
+    ```sh
+    ./install_feast.sh
+    ```
+
+## 3. Client Setup
+
+### A. Kubernetes Authorization
+- **Step 1: Create the Client Feature Store YAML**
+  - Set up the client feature store with the remote connection details for the registry, online, and offline stores, using auth type `kubernetes`. See the client remote setting example here: [feature_store.yaml](client/k8s/feature_repo/feature_store.yaml).
+- **Step 2: Deploy the Client Examples**
+  - As an example, we created 3 different users: 1. [admin_user](client/k8s/admin_user_resources.yaml), 2. [readonly_user](client/k8s/readonly_user_resources.yaml), and 3. [unauthorized_user](client/k8s/unauthorized_user_resources.yaml).
+  - Each user is assigned their own service account and roles, as shown in the table below.
+    ##### Roles and Permissions for Examples (Admin and User)
+    | **User**          | **Service Account**        | **Roles**        | **Permission**         | **Feast Resources**                                                                                                                                       | **Actions**                                                                               |
+    |-------------------|----------------------------|------------------|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
+    | admin             | feast-admin-sa             | feast-admin-role | feast_admin_permission | FeatureView, OnDemandFeatureView, BatchFeatureView, StreamFeatureView, Entity, FeatureService, DataSource, ValidationReference, SavedDataset, Permission  | CREATE, DESCRIBE, UPDATE, DELETE, READ_ONLINE, READ_OFFLINE, WRITE_ONLINE, WRITE_OFFLINE  |
+    | user              | feast-user-sa              | feast-user-role  | feast_user_permission  | FeatureView, OnDemandFeatureView, BatchFeatureView, StreamFeatureView, Entity, FeatureService, DataSource, ValidationReference, SavedDataset, Permission  | DESCRIBE, READ_OFFLINE, READ_ONLINE                                                       |
+    | unauthorized-user | feast-unauthorized-user-sa |                  |                        |                                                                                                                                                           |                                                                                           |
+  - To deploy the clients, confirm the `Apply client creation examples ? (y/n)` prompt with `y`.
+  - The deployment of the overall setup looks like this:
+
+    ![Deployment.png](deployment.png)
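+  - To sanity-check the role assignments from inside one of the client pods, a minimal sketch (using the ConfigMap-mounted `/feature_repo` configuration that the deployments above rely on; the expected outcomes follow from the roles table, not from output captured here):
+
+    ```python
+    from feast import FeatureStore
+
+    # Runs inside a client pod; the mounted feature_store.yaml points at the remote servers.
+    store = FeatureStore(repo_path="/feature_repo")
+
+    # Listing objects should succeed for the admin and readonly users, while the
+    # unauthorized user (no role bound) is expected to be denied or to see nothing.
+    print([fv.name for fv in store.list_feature_views()])
+    ```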
+### B. OIDC Authorization
+- **Step 1: Create the Client Feature Store YAML**
+  - Set up the client feature store with the remote connection details for the registry, online, and offline stores.
+  - Set the auth `type` to `oidc`.
+  - Update the client secret, and any other settings if required, in the client-side `feature_store.yaml`, as shown below:
+    ```yaml
+    auth_discovery_url: https://keycloak-feast-dev.apps.com/realms/feast-rbac/.well-known/openid-configuration
+    client_id: feast-client
+    client_secret: update-this-value
+    username: ${FEAST_USERNAME}
+    password: ${FEAST_PASSWORD}
+    ```
+  - See the client remote setting example here: [feature_store.yaml](client/oidc/feature_repo/feature_store.yaml).
+- **Step 2: Create the Roles and Users**
+  - Under the `feast-client` client, create the two roles `feast-admin-role` and `feast-user-role`.
+  - Under the `feast-rbac` realm, create 3 different users: `admin-user`, `readonly-user`, and `unauthorized-user`. Assign the password `feast` to each user.
+  - Map the roles to the users: select the `admin-user`, go to `Role mapping`, and assign the `feast-admin-role`. Select the `readonly-user` and assign the `feast-user-role`. For the `unauthorized-user`, do not assign any roles.
+- **Step 3: Deploy the Client Examples**
+  - For OIDC, similar to the Kubernetes examples, create the different deployments and add the username and password as environment variables: 1. [admin_user](client/oidc/admin_user_resources.yaml), 2. [readonly_user](client/oidc/readonly_user_resources.yaml), and 3. [unauthorized_user](client/oidc/unauthorized_user_resources.yaml).
+  - To deploy the clients, confirm the `Apply client creation examples ? (y/n)` prompt with `y`.
+
+## 4. Permissions Management
+- **Step 1: Apply the Permissions**
+  - See the code example in [permissions_apply.py](server/feature_repo/permissions_apply.py) for applying the permissions; the same file is used for both the Kubernetes and OIDC setups.
+  - The `install_feast.sh` script can run `feast apply` from the registry pod when you confirm the `Apply 'feast apply' in the remote registry? (y/n)` prompt.
+- **Step 2: Validate the Permissions**
+  - Use the Feast CLI to validate the permissions with the command `feast permissions list`; for more details, use `feast permissions list -v`. Additionally, there are other commands such as
+    `feast permissions check / describe / list-roles`.
+
+## 5. Validating the Permissions/RBAC Results
+- **Run the Examples**
+  - As outlined in the [test.py](client/k8s/feature_repo/test.py) script, the example attempts to fetch historical features, perform materialization, fetch online features, and push to the online/offline store, depending on the user's roles:
+    - The `admin-user` can perform all actions on all objects.
+    - The `readonly-user` can only read or query objects.
+    - The `unauthorized-user` should not be able to read or write any resources, as no role is assigned to this user.
+  - From each user's pod, run the example with `python feature_repo/test.py`.
+
+## 6. Local Testing and Cleanup
+- **Local Testing**
+  - For local testing, port forward the PostgreSQL service and the Feast servers with the commands below:
+    ```sh
+    kubectl port-forward svc/postgresql 5432:5432
+    kubectl port-forward svc/feast-offline-server-feast-feature-server 8815:80
+    kubectl port-forward svc/feast-registry-server-feast-feature-server 6570:80
+    kubectl port-forward svc/feast-feature-server 6566:80
+    ```
+  - When testing against Kubernetes authentication, set the environment variable `LOCAL_K8S_TOKEN` in each example. The token can be obtained from the service account, as sketched below.
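+  - For example, a short-lived token for one of the example service accounts can be minted with `kubectl create token` (available in Kubernetes v1.24+) and set from Python before the client is built. A minimal sketch, assuming the `feast-admin-sa` service account from the Kubernetes client examples (in `test.py` this value would replace the empty `LOCAL_K8S_TOKEN` assignment):
+
+    ```python
+    import os
+    import subprocess
+
+    # Mint a short-lived token for the admin service account and expose it via the
+    # LOCAL_K8S_TOKEN environment variable referenced in the example scripts.
+    token = subprocess.run(
+        ["kubectl", "create", "token", "feast-admin-sa", "-n", "feast-dev"],
+        check=True, capture_output=True, text=True,
+    ).stdout.strip()
+    os.environ["LOCAL_K8S_TOKEN"] = token
+    ```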
+- **Cleanup** + - Run the command + - ```./cleanup_feast.sh``` \ No newline at end of file diff --git a/examples/rbac-remote/cleanup_feast.sh b/examples/rbac-remote/cleanup_feast.sh new file mode 100755 index 0000000000..18acf6727c --- /dev/null +++ b/examples/rbac-remote/cleanup_feast.sh @@ -0,0 +1,24 @@ +#!/bin/bash + +DEFAULT_HELM_RELEASES=("feast-feature-server" "feast-offline-server" "feast-registry-server") +NAMESPACE="feast-dev" + +HELM_RELEASES=(${1:-${DEFAULT_HELM_RELEASES[@]}}) +NAMESPACE=${2:-$NAMESPACE} + +echo "Deleting Helm releases..." +for release in "${HELM_RELEASES[@]}"; do + helm uninstall $release -n $NAMESPACE +done + +echo "Deleting Kubernetes roles, role bindings, and service accounts for clients" +kubectl delete -f client/k8s/admin_user_resources.yaml +kubectl delete -f client/k8s/readonly_user_resources.yaml +kubectl delete -f client/k8s/unauthorized_user_resources.yaml +kubectl delete -f client/oidc/admin_user_resources.yaml +kubectl delete -f client/oidc/readonly_user_resources.yaml +kubectl delete -f client/oidc/unauthorized_user_resources.yaml +kubectl delete -f server/k8s/server_resources.yaml +kubectl delete configmap client-feature-repo-config + +echo "Cleanup completed." diff --git a/examples/rbac-remote/client/k8s/admin_user_resources.yaml b/examples/rbac-remote/client/k8s/admin_user_resources.yaml new file mode 100644 index 0000000000..d5df8bcbf2 --- /dev/null +++ b/examples/rbac-remote/client/k8s/admin_user_resources.yaml @@ -0,0 +1,56 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: feast-admin-sa + namespace: feast-dev +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: feast-admin-role + namespace: feast-dev +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: feast-admin-rolebinding + namespace: feast-dev +subjects: + - kind: ServiceAccount + name: feast-admin-sa + namespace: feast-dev +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: feast-admin-role +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: client-admin-user + namespace: feast-dev + labels: + app: client-admin +spec: + replicas: 1 + selector: + matchLabels: + app: client-admin + template: + metadata: + labels: + app: client-admin + spec: + serviceAccountName: feast-admin-sa + containers: + - name: client-admin-container + image: feastdev/feature-server:latest + imagePullPolicy: Always + command: ["sleep", "infinity"] + volumeMounts: + - name: client-feature-repo-config + mountPath: /feature_repo + volumes: + - name: client-feature-repo-config + configMap: + name: client-feature-repo-config diff --git a/examples/rbac-remote/client/k8s/feature_repo/feature_store.yaml b/examples/rbac-remote/client/k8s/feature_repo/feature_store.yaml new file mode 100644 index 0000000000..d316005098 --- /dev/null +++ b/examples/rbac-remote/client/k8s/feature_repo/feature_store.yaml @@ -0,0 +1,14 @@ +project: server +registry: + registry_type: remote + path: feast-registry-server-feast-feature-server.feast-dev.svc.cluster.local:80 +offline_store: + type: remote + host: feast-offline-server-feast-feature-server.feast-dev.svc.cluster.local + port: 80 +online_store: + type: remote + path: http://feast-feature-server.feast-dev.svc.cluster.local:80 +auth: + type: kubernetes + diff --git a/examples/rbac-remote/client/k8s/feature_repo/test.py b/examples/rbac-remote/client/k8s/feature_repo/test.py new file mode 100644 index 0000000000..6e1480bc94 --- /dev/null +++ b/examples/rbac-remote/client/k8s/feature_repo/test.py 
@@ -0,0 +1,140 @@ +import os +from datetime import datetime + +import pandas as pd +from feast import FeatureStore +from feast.data_source import PushMode + + +def run_demo(): + try: + os.environ["LOCAL_K8S_TOKEN"] = "" + + store = FeatureStore(repo_path="/feature_repo") + + print("\n--- Historical features for training ---") + fetch_historical_features_entity_df(store, for_batch_scoring=False) + + print("\n--- Historical features for batch scoring ---") + fetch_historical_features_entity_df(store, for_batch_scoring=True) + + try: + print("\n--- Load features into online store/materialize_incremental ---") + feature_views= store.list_feature_views() + if not feature_views: + raise PermissionError("No access to feature-views or no feature-views available.") + store.materialize_incremental(end_date=datetime.now()) + except PermissionError as pe: + print(f"Permission error: {pe}") + except Exception as e: + print(f"An occurred while performing materialize incremental: {e}") + + print("\n--- Online features ---") + fetch_online_features(store) + + print("\n--- Online features retrieved (instead) through a feature service---") + fetch_online_features(store, source="feature_service") + + print( + "\n--- Online features retrieved (using feature service v3, which uses a feature view with a push source---" + ) + fetch_online_features(store, source="push") + + print("\n--- Simulate a stream event ingestion of the hourly stats df ---") + event_df = pd.DataFrame.from_dict( + { + "driver_id": [1001], + "event_timestamp": [datetime.now()], + "created": [datetime.now()], + "conv_rate": [1.0], + "acc_rate": [1.0], + "avg_daily_trips": [1000], + } + ) + store.push("driver_stats_push_source", event_df, to=PushMode.ONLINE_AND_OFFLINE) + + print("\n--- Online features again with updated values from a stream push---") + fetch_online_features(store, source="push") + + except Exception as e: + print(f"An error occurred: {e}") + + +def fetch_historical_features_entity_df(store: FeatureStore, for_batch_scoring: bool): + try: + entity_df = pd.DataFrame.from_dict( + { + "driver_id": [1001, 1002, 1003], + "event_timestamp": [ + datetime(2021, 4, 12, 10, 59, 42), + datetime(2021, 4, 12, 8, 12, 10), + datetime(2021, 4, 12, 16, 40, 26), + ], + "label_driver_reported_satisfaction": [1, 5, 3], + # values we're using for an on-demand transformation + "val_to_add": [1, 2, 3], + "val_to_add_2": [10, 20, 30], + + } + + ) + if for_batch_scoring: + entity_df["event_timestamp"] = pd.to_datetime("now", utc=True) + + training_df = store.get_historical_features( + entity_df=entity_df, + features=[ + "driver_hourly_stats:conv_rate", + "driver_hourly_stats:acc_rate", + "driver_hourly_stats:avg_daily_trips", + "transformed_conv_rate:conv_rate_plus_val1", + "transformed_conv_rate:conv_rate_plus_val2", + ], + ).to_df() + print(training_df.head()) + + except Exception as e: + print(f"An error occurred while fetching historical features: {e}") + + +def fetch_online_features(store, source: str = ""): + try: + entity_rows = [ + # {join_key: entity_value} + { + "driver_id": 1001, + "val_to_add": 1000, + "val_to_add_2": 2000, + }, + { + "driver_id": 1002, + "val_to_add": 1001, + "val_to_add_2": 2002, + }, + ] + if source == "feature_service": + features_to_fetch = store.get_feature_service("driver_activity_v1") + elif source == "push": + features_to_fetch = store.get_feature_service("driver_activity_v3") + else: + features_to_fetch = [ + "driver_hourly_stats:acc_rate", + "transformed_conv_rate:conv_rate_plus_val1", + 
"transformed_conv_rate:conv_rate_plus_val2", + ] + returned_features = store.get_online_features( + features=features_to_fetch, + entity_rows=entity_rows, + ).to_dict() + for key, value in sorted(returned_features.items()): + print(key, " : ", value) + + except Exception as e: + print(f"An error occurred while fetching online features: {e}") + + +if __name__ == "__main__": + try: + run_demo() + except Exception as e: + print(f"An error occurred in the main execution: {e}") diff --git a/examples/rbac-remote/client/k8s/readonly_user_resources.yaml b/examples/rbac-remote/client/k8s/readonly_user_resources.yaml new file mode 100644 index 0000000000..c9094e7f2f --- /dev/null +++ b/examples/rbac-remote/client/k8s/readonly_user_resources.yaml @@ -0,0 +1,57 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: feast-user-sa + namespace: feast-dev +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: feast-user-role + namespace: feast-dev +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: feast-user-rolebinding + namespace: feast-dev +subjects: + - kind: ServiceAccount + name: feast-user-sa + namespace: feast-dev +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: feast-user-role +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: client-readonly-user + namespace: feast-dev + labels: + app: client-user +spec: + replicas: 1 + selector: + matchLabels: + app: client-user + template: + metadata: + labels: + app: client-user + spec: + serviceAccountName: feast-user-sa + containers: + - name: client-user-container + image: feastdev/feature-server:latest + imagePullPolicy: Always + command: ["sleep", "infinity"] + volumeMounts: + - name: client-feature-repo-config + mountPath: /feature_repo + volumes: + - name: client-feature-repo-config + configMap: + name: client-feature-repo-config + diff --git a/examples/rbac-remote/client/k8s/unauthorized_user_resources.yaml b/examples/rbac-remote/client/k8s/unauthorized_user_resources.yaml new file mode 100644 index 0000000000..5068c94fd9 --- /dev/null +++ b/examples/rbac-remote/client/k8s/unauthorized_user_resources.yaml @@ -0,0 +1,36 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: feast-unauthorized-user-sa + namespace: feast-dev +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: client-unauthorized-user + namespace: feast-dev + labels: + app: client-unauthorized-user +spec: + replicas: 1 + selector: + matchLabels: + app: client-unauthorized-user + template: + metadata: + labels: + app: client-unauthorized-user + spec: + serviceAccountName: feast-unauthorized-user-sa + containers: + - name: client-unauthorized-user-container + image: feastdev/feature-server:latest + imagePullPolicy: Always + command: ["sleep", "infinity"] + volumeMounts: + - name: client-feature-repo-config + mountPath: /feature_repo + volumes: + - name: client-feature-repo-config + configMap: + name: client-feature-repo-config diff --git a/examples/rbac-remote/client/oidc/admin_user_resources.yaml b/examples/rbac-remote/client/oidc/admin_user_resources.yaml new file mode 100644 index 0000000000..7843ce3c9d --- /dev/null +++ b/examples/rbac-remote/client/oidc/admin_user_resources.yaml @@ -0,0 +1,34 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: client-admin-user + namespace: feast-dev + labels: + app: client-admin +spec: + replicas: 1 + selector: + matchLabels: + app: client-admin + template: + metadata: + labels: + app: client-admin + spec: + containers: + - name: 
client-admin-container + image: feastdev/feature-server:latest + imagePullPolicy: Always + command: ["sleep", "infinity"] + env: + - name: FEAST_USERNAME + value: admin-user + - name: FEAST_PASSWORD + value: feast + volumeMounts: + - name: client-feature-repo-config + mountPath: /feature_repo + volumes: + - name: client-feature-repo-config + configMap: + name: client-feature-repo-config diff --git a/examples/rbac-remote/client/oidc/feature_repo/feature_store.yaml b/examples/rbac-remote/client/oidc/feature_repo/feature_store.yaml new file mode 100644 index 0000000000..1454e16df9 --- /dev/null +++ b/examples/rbac-remote/client/oidc/feature_repo/feature_store.yaml @@ -0,0 +1,19 @@ +project: server +registry: + registry_type: remote + path: feast-registry-server-feast-feature-server.feast-dev.svc.cluster.local:80 +offline_store: + type: remote + host: feast-offline-server-feast-feature-server.feast-dev.svc.cluster.local + port: 80 +online_store: + type: remote + path: http://feast-feature-server.feast-dev.svc.cluster.local:80 +auth: + type: oidc + auth_discovery_url: https://keycloak-feast-dev.apps.com/realms/feast-rbac/.well-known/openid-configuration + client_id: feast-client + client_secret: update-this-value + username: ${FEAST_USERNAME} + password: ${FEAST_PASSWORD} +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/client/oidc/feature_repo/test.py b/examples/rbac-remote/client/oidc/feature_repo/test.py new file mode 100644 index 0000000000..6e1480bc94 --- /dev/null +++ b/examples/rbac-remote/client/oidc/feature_repo/test.py @@ -0,0 +1,140 @@ +import os +from datetime import datetime + +import pandas as pd +from feast import FeatureStore +from feast.data_source import PushMode + + +def run_demo(): + try: + os.environ["LOCAL_K8S_TOKEN"] = "" + + store = FeatureStore(repo_path="/feature_repo") + + print("\n--- Historical features for training ---") + fetch_historical_features_entity_df(store, for_batch_scoring=False) + + print("\n--- Historical features for batch scoring ---") + fetch_historical_features_entity_df(store, for_batch_scoring=True) + + try: + print("\n--- Load features into online store/materialize_incremental ---") + feature_views= store.list_feature_views() + if not feature_views: + raise PermissionError("No access to feature-views or no feature-views available.") + store.materialize_incremental(end_date=datetime.now()) + except PermissionError as pe: + print(f"Permission error: {pe}") + except Exception as e: + print(f"An occurred while performing materialize incremental: {e}") + + print("\n--- Online features ---") + fetch_online_features(store) + + print("\n--- Online features retrieved (instead) through a feature service---") + fetch_online_features(store, source="feature_service") + + print( + "\n--- Online features retrieved (using feature service v3, which uses a feature view with a push source---" + ) + fetch_online_features(store, source="push") + + print("\n--- Simulate a stream event ingestion of the hourly stats df ---") + event_df = pd.DataFrame.from_dict( + { + "driver_id": [1001], + "event_timestamp": [datetime.now()], + "created": [datetime.now()], + "conv_rate": [1.0], + "acc_rate": [1.0], + "avg_daily_trips": [1000], + } + ) + store.push("driver_stats_push_source", event_df, to=PushMode.ONLINE_AND_OFFLINE) + + print("\n--- Online features again with updated values from a stream push---") + fetch_online_features(store, source="push") + + except Exception as e: + print(f"An error occurred: {e}") + + +def 
fetch_historical_features_entity_df(store: FeatureStore, for_batch_scoring: bool): + try: + entity_df = pd.DataFrame.from_dict( + { + "driver_id": [1001, 1002, 1003], + "event_timestamp": [ + datetime(2021, 4, 12, 10, 59, 42), + datetime(2021, 4, 12, 8, 12, 10), + datetime(2021, 4, 12, 16, 40, 26), + ], + "label_driver_reported_satisfaction": [1, 5, 3], + # values we're using for an on-demand transformation + "val_to_add": [1, 2, 3], + "val_to_add_2": [10, 20, 30], + + } + + ) + if for_batch_scoring: + entity_df["event_timestamp"] = pd.to_datetime("now", utc=True) + + training_df = store.get_historical_features( + entity_df=entity_df, + features=[ + "driver_hourly_stats:conv_rate", + "driver_hourly_stats:acc_rate", + "driver_hourly_stats:avg_daily_trips", + "transformed_conv_rate:conv_rate_plus_val1", + "transformed_conv_rate:conv_rate_plus_val2", + ], + ).to_df() + print(training_df.head()) + + except Exception as e: + print(f"An error occurred while fetching historical features: {e}") + + +def fetch_online_features(store, source: str = ""): + try: + entity_rows = [ + # {join_key: entity_value} + { + "driver_id": 1001, + "val_to_add": 1000, + "val_to_add_2": 2000, + }, + { + "driver_id": 1002, + "val_to_add": 1001, + "val_to_add_2": 2002, + }, + ] + if source == "feature_service": + features_to_fetch = store.get_feature_service("driver_activity_v1") + elif source == "push": + features_to_fetch = store.get_feature_service("driver_activity_v3") + else: + features_to_fetch = [ + "driver_hourly_stats:acc_rate", + "transformed_conv_rate:conv_rate_plus_val1", + "transformed_conv_rate:conv_rate_plus_val2", + ] + returned_features = store.get_online_features( + features=features_to_fetch, + entity_rows=entity_rows, + ).to_dict() + for key, value in sorted(returned_features.items()): + print(key, " : ", value) + + except Exception as e: + print(f"An error occurred while fetching online features: {e}") + + +if __name__ == "__main__": + try: + run_demo() + except Exception as e: + print(f"An error occurred in the main execution: {e}") diff --git a/examples/rbac-remote/client/oidc/readonly_user_resources.yaml b/examples/rbac-remote/client/oidc/readonly_user_resources.yaml new file mode 100644 index 0000000000..c43137bfba --- /dev/null +++ b/examples/rbac-remote/client/oidc/readonly_user_resources.yaml @@ -0,0 +1,34 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: client-readonly-user + namespace: feast-dev + labels: + app: client-user +spec: + replicas: 1 + selector: + matchLabels: + app: client-user + template: + metadata: + labels: + app: client-user + spec: + containers: + - name: client-admin-container + image: feastdev/feature-server:latest + imagePullPolicy: Always + command: ["sleep", "infinity"] + env: + - name: FEAST_USERNAME + value: readonly-user + - name: FEAST_PASSWORD + value: feast + volumeMounts: + - name: client-feature-repo-config + mountPath: /feature_repo + volumes: + - name: client-feature-repo-config + configMap: + name: client-feature-repo-config diff --git a/examples/rbac-remote/client/oidc/unauthorized_user_resources.yaml b/examples/rbac-remote/client/oidc/unauthorized_user_resources.yaml new file mode 100644 index 0000000000..f99bb3e987 --- /dev/null +++ b/examples/rbac-remote/client/oidc/unauthorized_user_resources.yaml @@ -0,0 +1,35 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: client-unauthorized-user + namespace: feast-dev + labels: + app: client-unauthorized-user +spec: + replicas: 1 + selector: + matchLabels: + app: client-unauthorized-user + 
template: + metadata: + labels: + app: client-unauthorized-user + spec: + containers: + - name: client-admin-container + image: feastdev/feature-server:latest + imagePullPolicy: Always + command: ["sleep", "infinity"] + env: + - name: FEAST_USERNAME + value: unauthorized-user + - name: FEAST_PASSWORD + value: feast + volumeMounts: + - name: client-feature-repo-config + mountPath: /feature_repo + volumes: + - name: client-feature-repo-config + configMap: + name: client-feature-repo-config + diff --git a/examples/rbac-remote/demo.jpg b/examples/rbac-remote/demo.jpg new file mode 100644 index 0000000000..718e49dde6 Binary files /dev/null and b/examples/rbac-remote/demo.jpg differ diff --git a/examples/rbac-remote/deployment.png b/examples/rbac-remote/deployment.png new file mode 100644 index 0000000000..9b9a0d7b2a Binary files /dev/null and b/examples/rbac-remote/deployment.png differ diff --git a/examples/rbac-remote/install_feast.sh b/examples/rbac-remote/install_feast.sh new file mode 100755 index 0000000000..b87d44b335 --- /dev/null +++ b/examples/rbac-remote/install_feast.sh @@ -0,0 +1,109 @@ +#!/bin/bash + +# Specify the RBAC type (folder) +read -p "Enter RBAC type (e.g., k8s or oidc): " FOLDER + +echo "You have selected the RBAC type: $FOLDER" + +# feature_store files name for the servers +OFFLINE_YAML="feature_store_offline.yaml" +ONLINE_YAML="feature_store_online.yaml" +REGISTRY_YAML="feature_store_registry.yaml" + +# Helm chart path and service account +HELM_CHART_PATH="../../infra/charts/feast-feature-server" +SERVICE_ACCOUNT_NAME="feast-sa" +CLIENT_REPO_DIR="client/$FOLDER/feature_repo" + +# Function to check if a file exists and encode it to base64 +encode_to_base64() { + local file_path=$1 + if [ ! -f "$file_path" ]; then + echo "Error: File not found at $file_path" + exit 1 + fi + base64 < "$file_path" +} + +FEATURE_STORE_OFFLINE_YAML_PATH="server/$FOLDER/$OFFLINE_YAML" +FEATURE_STORE_ONLINE_YAML_PATH="server/$FOLDER/$ONLINE_YAML" +FEATURE_STORE_REGISTRY_YAML_PATH="server/$FOLDER/$REGISTRY_YAML" + +# Encode the YAML files to base64 +FEATURE_STORE_OFFLINE_YAML_BASE64=$(encode_to_base64 "$FEATURE_STORE_OFFLINE_YAML_PATH") +FEATURE_STORE_ONLINE_YAML_BASE64=$(encode_to_base64 "$FEATURE_STORE_ONLINE_YAML_PATH") +FEATURE_STORE_REGISTRY_YAML_BASE64=$(encode_to_base64 "$FEATURE_STORE_REGISTRY_YAML_PATH") + +# Check if base64 encoding was successful +if [ -z "$FEATURE_STORE_OFFLINE_YAML_BASE64" ] || [ -z "$FEATURE_STORE_ONLINE_YAML_BASE64" ] || [ -z "$FEATURE_STORE_REGISTRY_YAML_BASE64" ]; then + echo "Error: Failed to base64 encode one or more feature_store.yaml files in folder $FOLDER." + exit 1 +fi + +# Upgrade or install Feast components for the specified folder +read -p "Deploy Feast server components for $FOLDER? 
(y/n) " confirm_server +if [[ $confirm_server == [yY] ]]; then + # Apply the server service accounts and role bindings + kubectl apply -f "server/k8s/server_resources.yaml" + + # Upgrade or install Feast components + echo "Upgrading or installing Feast server components for $FOLDER" + + helm upgrade --install feast-registry-server $HELM_CHART_PATH \ + --set feast_mode=registry \ + --set feature_store_yaml_base64=$FEATURE_STORE_REGISTRY_YAML_BASE64 \ + --set serviceAccount.name=$SERVICE_ACCOUNT_NAME + + helm upgrade --install feast-feature-server $HELM_CHART_PATH \ + --set feature_store_yaml_base64=$FEATURE_STORE_ONLINE_YAML_BASE64 \ + --set serviceAccount.name=$SERVICE_ACCOUNT_NAME + + helm upgrade --install feast-offline-server $HELM_CHART_PATH \ + --set feast_mode=offline \ + --set feature_store_yaml_base64=$FEATURE_STORE_OFFLINE_YAML_BASE64 \ + --set serviceAccount.name=$SERVICE_ACCOUNT_NAME + + echo "Server components deployed for $FOLDER." +else + echo "Server components not deployed for $FOLDER." +fi + +read -p "Apply client creation examples ? (y/n) " confirm_clients +if [[ $confirm_clients == [yY] ]]; then + kubectl delete configmap client-feature-repo-config --ignore-not-found + kubectl create configmap client-feature-repo-config --from-file=$CLIENT_REPO_DIR + + kubectl apply -f "client/$FOLDER/admin_user_resources.yaml" + kubectl apply -f "client/$FOLDER/readonly_user_resources.yaml" + kubectl apply -f "client/$FOLDER/unauthorized_user_resources.yaml" + + echo "Client resources applied." +else + echo "Client resources not applied." +fi + +read -p "Apply 'feast apply' in the remote registry? (y/n) " confirm_apply +if [[ $confirm_apply == [yY] ]]; then + + POD_NAME=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep '^feast-registry-server-feast-feature-server') + + if [ -z "$POD_NAME" ]; then + echo "No pod found with the prefix feast-registry-server-feast-feature-server" + exit 1 + fi + + LOCAL_DIR="./server/feature_repo/" + REMOTE_DIR="/app/" + + echo "Copying files from $LOCAL_DIR to $POD_NAME:$REMOTE_DIR" + kubectl cp $LOCAL_DIR $POD_NAME:$REMOTE_DIR + + echo "Files copied successfully!" + + kubectl exec $POD_NAME -- feast -c feature_repo apply + echo "'feast apply' command executed successfully in the for remote registry." +else + echo "'feast apply' not performed ." +fi + +echo "Setup completed." diff --git a/examples/rbac-remote/server/feature_repo/example_repo.py b/examples/rbac-remote/server/feature_repo/example_repo.py new file mode 100644 index 0000000000..5b8105bb94 --- /dev/null +++ b/examples/rbac-remote/server/feature_repo/example_repo.py @@ -0,0 +1,130 @@ +# This is an example feature definition file + +from datetime import timedelta + +import pandas as pd + +from feast import Entity, FeatureService, FeatureView, Field, PushSource, RequestSource +from feast.infra.offline_stores.contrib.postgres_offline_store.postgres_source import PostgreSQLSource + +from feast.on_demand_feature_view import on_demand_feature_view +from feast.types import Float32, Float64, Int64 + +# Define an entity for the driver. You can think of an entity as a primary key used to +# fetch features. +driver = Entity(name="driver", join_keys=["driver_id"]) + +driver_stats_source = PostgreSQLSource( + name="driver_hourly_stats_source", + query="SELECT * FROM feast_driver_hourly_stats", + timestamp_field="event_timestamp", + created_timestamp_column="created", +) + +# Our parquet files contain sample data that includes a driver_id column, timestamps and +# three feature column. 
Here we define a Feature View that will allow us to serve this +# data to our model online. +driver_stats_fv = FeatureView( + # The unique name of this feature view. Two feature views in a single + # project cannot have the same name + name="driver_hourly_stats", + entities=[driver], + ttl=timedelta(days=1), + # The list of features defined below act as a schema to both define features + # for both materialization of features into a store, and are used as references + # during retrieval for building a training dataset or serving features + schema=[ + Field(name="conv_rate", dtype=Float32), + Field(name="acc_rate", dtype=Float32), + Field(name="avg_daily_trips", dtype=Int64), + ], + online=True, + source=driver_stats_source, + # Tags are user defined key/value pairs that are attached to each + # feature view + tags={"team": "driver_performance"}, +) + +# Define a request data source which encodes features / information only +# available at request time (e.g. part of the user initiated HTTP request) +input_request = RequestSource( + name="vals_to_add", + schema=[ + Field(name="val_to_add", dtype=Int64), + Field(name="val_to_add_2", dtype=Int64), + ], +) + + +# Define an on demand feature view which can generate new features based on +# existing feature views and RequestSource features +@on_demand_feature_view( + sources=[driver_stats_fv, input_request], + schema=[ + Field(name="conv_rate_plus_val1", dtype=Float64), + Field(name="conv_rate_plus_val2", dtype=Float64), + ], +) +def transformed_conv_rate(inputs: pd.DataFrame) -> pd.DataFrame: + df = pd.DataFrame() + df["conv_rate_plus_val1"] = inputs["conv_rate"] + inputs["val_to_add"] + df["conv_rate_plus_val2"] = inputs["conv_rate"] + inputs["val_to_add_2"] + return df + + +# This groups features into a model version +driver_activity_v1 = FeatureService( + name="driver_activity_v1", + features=[ + driver_stats_fv[["conv_rate"]], # Sub-selects a feature from a feature view + transformed_conv_rate, # Selects all features from the feature view + ], +) +driver_activity_v2 = FeatureService( + name="driver_activity_v2", features=[driver_stats_fv, transformed_conv_rate] +) + +# Defines a way to push data (to be available offline, online or both) into Feast. +driver_stats_push_source = PushSource( + name="driver_stats_push_source", + batch_source=driver_stats_source, +) + +# Defines a slightly modified version of the feature view from above, where the source +# has been changed to the push source. This allows fresh features to be directly pushed +# to the online store for this feature view. 
+driver_stats_fresh_fv = FeatureView( + name="driver_hourly_stats_fresh", + entities=[driver], + ttl=timedelta(days=1), + schema=[ + Field(name="conv_rate", dtype=Float32), + Field(name="acc_rate", dtype=Float32), + Field(name="avg_daily_trips", dtype=Int64), + ], + online=True, + source=driver_stats_push_source, # Changed from above + tags={"team": "driver_performance"}, +) + + +# Define an on demand feature view which can generate new features based on +# existing feature views and RequestSource features +@on_demand_feature_view( + sources=[driver_stats_fresh_fv, input_request], # relies on fresh version of FV + schema=[ + Field(name="conv_rate_plus_val1", dtype=Float64), + Field(name="conv_rate_plus_val2", dtype=Float64), + ], +) +def transformed_conv_rate_fresh(inputs: pd.DataFrame) -> pd.DataFrame: + df = pd.DataFrame() + df["conv_rate_plus_val1"] = inputs["conv_rate"] + inputs["val_to_add"] + df["conv_rate_plus_val2"] = inputs["conv_rate"] + inputs["val_to_add_2"] + return df + + +driver_activity_v3 = FeatureService( + name="driver_activity_v3", + features=[driver_stats_fresh_fv, transformed_conv_rate_fresh], +) diff --git a/examples/rbac-remote/server/feature_repo/feature_store.yaml b/examples/rbac-remote/server/feature_repo/feature_store.yaml new file mode 100644 index 0000000000..78b13c660b --- /dev/null +++ b/examples/rbac-remote/server/feature_repo/feature_store.yaml @@ -0,0 +1,26 @@ +project: server +provider: local +registry: + registry_type: sql + path: postgresql+psycopg://feast:feast@postgresql.feast-dev.svc.cluster.local:5432/feast + cache_ttl_seconds: 60 + sqlalchemy_config_kwargs: + echo: false + pool_pre_ping: true +online_store: + type: postgres + host: postgresql.feast-dev.svc.cluster.local + port: 5432 + database: feast + db_schema: public + user: feast + password: feast +offline_store: + type: postgres + host: postgresql.feast-dev.svc.cluster.local + port: 5432 + database: feast + db_schema: public + user: feast + password: feast +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/server/feature_repo/permissions_apply.py b/examples/rbac-remote/server/feature_repo/permissions_apply.py new file mode 100644 index 0000000000..93bdf2ffc6 --- /dev/null +++ b/examples/rbac-remote/server/feature_repo/permissions_apply.py @@ -0,0 +1,21 @@ +from feast.feast_object import ALL_RESOURCE_TYPES +from feast.permissions.action import READ, AuthzedAction, ALL_ACTIONS +from feast.permissions.permission import Permission +from feast.permissions.policy import RoleBasedPolicy + +admin_roles = ["feast-admin-role"] +user_roles = ["feast-user-role"] + +user_perm = Permission( + name="feast_user_permission", + types=ALL_RESOURCE_TYPES, + policy=RoleBasedPolicy(roles=user_roles), + actions=[AuthzedAction.DESCRIBE] + READ +) + +admin_perm = Permission( + name="feast_admin_permission", + types=ALL_RESOURCE_TYPES, + policy=RoleBasedPolicy(roles=admin_roles), + actions=ALL_ACTIONS +) diff --git a/examples/rbac-remote/server/k8s/feature_store_offline.yaml b/examples/rbac-remote/server/k8s/feature_store_offline.yaml new file mode 100644 index 0000000000..4fc01508bd --- /dev/null +++ b/examples/rbac-remote/server/k8s/feature_store_offline.yaml @@ -0,0 +1,16 @@ +project: server +provider: local +registry: + registry_type: remote + path: feast-registry-server-feast-feature-server.feast-dev.svc.cluster.local:80 +offline_store: + type: postgres + host: postgresql.feast-dev.svc.cluster.local + port: 5432 + database: feast + db_schema: public + user: feast + password: feast +auth: + 
type: kubernetes +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/server/k8s/feature_store_online.yaml b/examples/rbac-remote/server/k8s/feature_store_online.yaml new file mode 100644 index 0000000000..aa167731b2 --- /dev/null +++ b/examples/rbac-remote/server/k8s/feature_store_online.yaml @@ -0,0 +1,20 @@ +project: server +provider: local +registry: + registry_type: remote + path: feast-registry-server-feast-feature-server.feast-dev.svc.cluster.local:80 +online_store: + type: postgres + host: postgresql.feast-dev.svc.cluster.local + port: 5432 + database: feast + db_schema: public + user: feast + password: feast +offline_store: + type: remote + host: feast-offline-server-feast-feature-server.feast-dev.svc.cluster.local + port: 80 +auth: + type: kubernetes +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/server/k8s/feature_store_registry.yaml b/examples/rbac-remote/server/k8s/feature_store_registry.yaml new file mode 100644 index 0000000000..579141fb01 --- /dev/null +++ b/examples/rbac-remote/server/k8s/feature_store_registry.yaml @@ -0,0 +1,12 @@ +project: server +provider: local +registry: + registry_type: sql + path: postgresql+psycopg://feast:feast@postgresql.feast-dev.svc.cluster.local:5432/feast + cache_ttl_seconds: 60 + sqlalchemy_config_kwargs: + echo: false + pool_pre_ping: true +auth: + type: kubernetes +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/server/k8s/server_resources.yaml b/examples/rbac-remote/server/k8s/server_resources.yaml new file mode 100644 index 0000000000..03e35495d6 --- /dev/null +++ b/examples/rbac-remote/server/k8s/server_resources.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: feast-sa + namespace: feast-dev +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: feast-cluster-role +rules: + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["roles", "rolebindings", "clusterrolebindings"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: feast-cluster-rolebinding +subjects: + - kind: ServiceAccount + name: feast-sa + namespace: feast-dev +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: feast-cluster-role diff --git a/examples/rbac-remote/server/oidc/feature_store_offline.yaml b/examples/rbac-remote/server/oidc/feature_store_offline.yaml new file mode 100644 index 0000000000..8ed4cc1ff3 --- /dev/null +++ b/examples/rbac-remote/server/oidc/feature_store_offline.yaml @@ -0,0 +1,18 @@ +project: server +provider: local +registry: + registry_type: remote + path: feast-registry-server-feast-feature-server.feast-dev.svc.cluster.local:80 +offline_store: + type: postgres + host: postgresql.feast-dev.svc.cluster.local + port: 5432 + database: feast + db_schema: public + user: feast + password: feast +auth: + type: oidc + auth_discovery_url: https://keycloak-feast-dev.apps.com/realms/feast-rbac/.well-known/openid-configuration + client_id: feast-client +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/server/oidc/feature_store_online.yaml b/examples/rbac-remote/server/oidc/feature_store_online.yaml new file mode 100644 index 0000000000..c47c3a0662 --- /dev/null +++ b/examples/rbac-remote/server/oidc/feature_store_online.yaml @@ -0,0 +1,22 @@ +project: server +provider: local +registry: + registry_type: remote + path: feast-registry-server-feast-feature-server.feast-dev.svc.cluster.local:80 +online_store: + type: 
postgres + host: postgresql.feast-dev.svc.cluster.local + port: 5432 + database: feast + db_schema: public + user: feast + password: feast +offline_store: + type: remote + host: feast-offline-server-feast-feature-server.feast-dev.svc.cluster.local + port: 80 +auth: + type: oidc + auth_discovery_url: https://keycloak-feast-dev.apps.com/realms/feast-rbac/.well-known/openid-configuration + client_id: feast-client +entity_key_serialization_version: 2 diff --git a/examples/rbac-remote/server/oidc/feature_store_registry.yaml b/examples/rbac-remote/server/oidc/feature_store_registry.yaml new file mode 100644 index 0000000000..a661d9dc56 --- /dev/null +++ b/examples/rbac-remote/server/oidc/feature_store_registry.yaml @@ -0,0 +1,14 @@ +project: server +provider: local +registry: + registry_type: sql + path: postgresql+psycopg://feast:feast@postgresql.feast-dev.svc.cluster.local:5432/feast + cache_ttl_seconds: 60 + sqlalchemy_config_kwargs: + echo: false + pool_pre_ping: true +auth: + type: oidc + auth_discovery_url: https://keycloak-feast-dev.apps.com/realms/feast-rbac/.well-known/openid-configuration + client_id: feast-client +entity_key_serialization_version: 2 diff --git a/infra/charts/feast-feature-server/templates/deployment.yaml b/infra/charts/feast-feature-server/templates/deployment.yaml index 8dddeed6fd..dc62be8b95 100644 --- a/infra/charts/feast-feature-server/templates/deployment.yaml +++ b/infra/charts/feast-feature-server/templates/deployment.yaml @@ -21,6 +21,7 @@ spec: labels: {{- include "feast-feature-server.selectorLabels" . | nindent 8 }} spec: + serviceAccountName: {{ .Values.serviceAccount.name | default "default" }} {{- with .Values.imagePullSecrets }} imagePullSecrets: {{- toYaml . | nindent 8 }} diff --git a/infra/charts/feast-feature-server/values.yaml b/infra/charts/feast-feature-server/values.yaml index 64d805a66c..22bbdeace0 100644 --- a/infra/charts/feast-feature-server/values.yaml +++ b/infra/charts/feast-feature-server/values.yaml @@ -44,6 +44,9 @@ service: type: ClusterIP port: 80 +serviceAccount: + name: "" + resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. 
This also increases chances charts run on environments with little diff --git a/sdk/python/feast/permissions/client/auth_client_manager_factory.py b/sdk/python/feast/permissions/client/auth_client_manager_factory.py index 3dff5fb45d..359072f38e 100644 --- a/sdk/python/feast/permissions/client/auth_client_manager_factory.py +++ b/sdk/python/feast/permissions/client/auth_client_manager_factory.py @@ -1,7 +1,11 @@ +import os +from typing import cast + from feast.permissions.auth.auth_type import AuthType from feast.permissions.auth_model import ( AuthConfig, KubernetesAuthConfig, + OidcAuthConfig, OidcClientAuthConfig, ) from feast.permissions.client.auth_client_manager import AuthenticationClientManager @@ -15,8 +19,15 @@ def get_auth_client_manager(auth_config: AuthConfig) -> AuthenticationClientManager: if auth_config.type == AuthType.OIDC.value: - assert isinstance(auth_config, OidcClientAuthConfig) - return OidcAuthClientManager(auth_config) + intra_communication_base64 = os.getenv("INTRA_COMMUNICATION_BASE64") + # If intra server communication call + if intra_communication_base64: + assert isinstance(auth_config, OidcAuthConfig) + client_auth_config = cast(OidcClientAuthConfig, auth_config) + else: + assert isinstance(auth_config, OidcClientAuthConfig) + client_auth_config = auth_config + return OidcAuthClientManager(client_auth_config) elif auth_config.type == AuthType.KUBERNETES.value: assert isinstance(auth_config, KubernetesAuthConfig) return KubernetesAuthClientManager(auth_config)