Deploying Self-Hosted Dify on Google Kubernetes Engine (GKE) with Helm

Dify is a great open-source tool that we use at Up Learn to build LLM applications. Running it on GKE gives you scalability, flexibility, and control over your data.
I recently had to figure out how to self-host it on Google Kubernetes Engine, and since I went through the details, I thought I’d save you the trouble.


Setup Characteristics

Before we start, here’s what we’ll be doing — so you know if this matches your requirements:

  • Deploy Boris Polonsky’s community helm chart in a GKE cluster, in a namespace called dify.
  • Use Google Cloud Storage (GCS) for external storage, since GKE doesn’t provide ReadWriteMany volumes natively.
  • Expose the deployment to the web with GKE’s ingress, but with a small config modification so the health check works.
  • Keep things simple by deploying everything in-cluster (Postgres, Redis, etc.) rather than relying on external services.

This gets the most complicated part of deploying Dify out of the way.
If you need an external database or Weaviate backups, you can add them by following the logic implemented here and checking the instructions in Boris’ values.yaml.

Getting Prerequisites Ready

Before deploying, we need to:

  1. Create a service account to access the GCS bucket.
  2. Create the GCS bucket.
  3. Fill in the values file: there are lots of credentials, configs, and keys to set.

Creating the Service Account for the GCS Bucket

Using the Console (UI)

  1. Console → IAM & Admin → Service Accounts → Create Service Account
  2. Assign no roles
  3. After creation, click the service account → Keys tab → Add key → Create new key → JSON
  4. The JSON key will be downloaded to your machine
  5. Get the base64 of that JSON; you’ll copy it into the values file later. One way is a client-side dry run, which prints the base64 under data: in the output:
kubectl create secret generic test \
  --from-file=<json-file-location> \
  --dry-run=client -o yaml
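Alternatively, you can skip kubectl and base64-encode the key directly. A small sketch (it encodes a stand-in string so it runs anywhere; point base64 at your real key file instead):

```shell
# Turn a JSON key into the single-line base64 that serviceAccountJsonBase64 expects.
# tr strips the line wrapping that some base64 builds add.
printf '{"type":"service_account"}' | base64 | tr -d '\n'; echo
# For the real key file:
# base64 < <json-file-location> | tr -d '\n'; echo
```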

Using the gcloud CLI

# Create service account
gcloud iam service-accounts create dify-gcs-sa \
  --description="Service account for Dify GCS bucket access" \
  --display-name="dify-gcs-sa"

# Create key
gcloud iam service-accounts keys create dify-gcs-key.json \
  --iam-account=dify-gcs-sa@<PROJECT_ID>.iam.gserviceaccount.com

# Print a dry-run secret manifest; the base64-encoded key appears under data: (nothing is applied)
kubectl create secret generic test \
  --from-file=dify-gcs-key.json \
  --dry-run=client -o yaml

Creating the GCS Bucket

Using the Console (UI)

  1. Console → Cloud Storage → Buckets → Create
  2. Enter a unique bucket name (e.g., dify-bucket-demo)
  3. Choose a region (match your GKE cluster region for best performance)
  4. Choose Standard storage class (or another, depending on cost/performance needs)
  5. Set Access control: leave default (“Uniform”) unless you need fine-grained ACLs
  6. Click Create
  7. Grant your service account access:

    • Go to Buckets → select your bucket → Permissions
    • Add Principal: dify-gcs-sa@<PROJECT_ID>.iam.gserviceaccount.com
    • Grant role: Storage Object Admin

Using the gcloud CLI

# Create the bucket
gcloud storage buckets create gs://<BUCKET-NAME> \
  --project <PROJECT_ID> \
  --location <LOCATION> \
  --uniform-bucket-level-access

# Bind IAM permissions
gcloud storage buckets add-iam-policy-binding gs://<BUCKET-NAME> \
  --member="serviceAccount:<SA>@<PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

Filling In the Values File

Here’s the values file we will be using. Save it as values.yaml. You can customize any field, but pay extra attention to the values between angle brackets (<>).
For secret keys you can use:

openssl rand -base64 42

api:
  enabled: true
  replicas: 1
  service:
    port: 5001
  logLevel: WARNING
  url:
    consoleApi: "https://<your-url>" # e.g. https://michel-dify.mydomain.com
    consoleWeb: "https://<your-url>" # you can repeat the same URL for all services if you want
    serviceApi: "https://<your-url>"
    appApi: "https://<your-url>"
    appWeb: "https://<your-url>"
    files: "https://<your-url>"
    marketplaceApi: "https://marketplace.dify.ai"
    marketplace: "https://marketplace.dify.ai"  
  secretKey: "<api-secret-key>"  # Generate one with `openssl rand -base64 42`.
#  mail: # I'm putting mail here but I'm commenting it. It's a nice to have, but it's not mandatory
#    type: smtp
#    defaultSender: "<The name you want for your sender>"
#    smtp:
#      server: "<your-smtp-server>" # e.g. smtp.gmail.com
#      port: 465 # 465 for Gmail
#      username: "<[email protected]>"
#      password: "<your-smtp-user-password>"
#      tls:
#        enabled: true
#        optimistic: false

worker:
  enabled: true
  replicas: 1
  logLevel: WARNING

proxy:
  enabled: true
  replicas: 1
  service:
    port: 80

web:
  enabled: true
  replicas: 1
  extraEnv:
  - name: EDITION
    value: "SELF_HOSTED"
  service:
    port: 3000

sandbox:
  enabled: true
  replicas: 1
  extraEnv:
  - name: WORKER_TIMEOUT
    value: "15"
  service:
    port: 8194
  auth:
    apiKey: "dify-sandbox"

pluginDaemon:
  enabled: true
  replicas: 1
  auth:
    serverKey: "<plugin-daemon-server-key>" # Generate one with `openssl rand -base64 42`.
    difyApiKey: "<plugin-daemon-dify-api-server-key>" # Generate one with `openssl rand -base64 42`.
  persistence:
    mountPath: "/app/storage"
    persistentVolumeClaim:
      existingClaim: ""
      storageClass:
      accessModes: ReadWriteOnce # This is set as ReadWriteMany by default, but since we are on GKE we need it to be ReadWriteOnce, with a single replica
      size: 10Gi
      subPath: ""

postgresql:
  enabled: true
  name: postgres
  global:
    storageClass: ""
    postgresql:
      auth:
        postgresPassword: "<postgres-password>" # Generate your own
        username: ""
        password: ""
        database: "dify"
        replicationPassword: "<replica-password>" # Generate your own
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: 15.3.0-debian-11-r7
    pullPolicy: IfNotPresent
  architecture: replication
  primary:
    resources:
      limits: {}
      requests: {}
    persistence:
      enabled: true
      storageClass: ""
      accessModes:
        - ReadWriteOnce
      size: 10Gi

weaviate:
  enabled: true
  authentication:
    anonymous_access:
      enabled: false
    apikey:
      enabled: true
      allowed_keys:
        - "<weaviate-allowed-key>" # Generate one with `openssl rand -base64 42`.
      users:
        - [email protected]
    oidc:
      enabled: false
  authorization:
    admin_list:
      enabled: true
      users:
      - [email protected]
      read_only_users:
  env:
    QUERY_MAXIMUM_RESULTS: 100000
    AUTHENTICATION_APIKEY_ENABLED: "true"
    AUTHENTICATION_APIKEY_ALLOWED_KEYS: "<weaviate-allowed-key>" # Same as the one you specified above
    AUTHENTICATION_APIKEY_USERS: "[email protected]"
    AUTHORIZATION_ADMINLIST_ENABLED: "true"
    AUTHORIZATION_ADMINLIST_USERS: "[email protected]"

service:
  type: ClusterIP
  port: 80

redis:
  enabled: true
  auth:
    enabled: true
    sentinel: true
    password: "<redis-password>" # Generate your own
    existingSecret: ""
    existingSecretPasswordKey: ""
    usePasswordFiles: false

externalPostgres:
  enabled: false

externalGCS:
  enabled: true
  bucketName:
    api: "<BUCKET-NAME>" # The bucket you created earlier, e.g. dify-bucket-demo
  serviceAccountJsonBase64: "<service-account-json-base64>" # The base64 of the service account JSON you generated earlier
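The values file above asks for a handful of random secrets. You can mint them all in one pass (a convenience sketch; the left-hand labels are just reminders, not chart keys):

```shell
# Print one fresh secret per values-file field that asks for one.
for name in secretKey serverKey difyApiKey weaviateKey postgresPassword redisPassword; do
  printf '%s: %s\n' "$name" "$(openssl rand -base64 42)"
done
```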

Deploying

Once prerequisites are done, deploying is as simple as:

helm repo add dify https://borispolonsky.github.io/dify-helm
helm repo update
helm upgrade --install dify dify/dify \
  -n dify \
  -f values.yaml \
  --create-namespace

Configuring Ingress (Exposing the App)

To expose your Dify deployment to the web, we’ll configure a GKE Ingress.
GKE’s load-balancer health check expects an HTTP 200 response, which it doesn’t get from Dify on the default path, so we’ll also point it at /apps with a small BackendConfig tweak.

1. BackendConfig and FrontendConfig

Save as gce-configs.yaml:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: dify-backend-config
  namespace: dify
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 80
    type: HTTP
    requestPath: /apps
  timeoutSec: 30
  connectionDraining:
    drainingTimeoutSec: 60
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: dify-frontend-config
  namespace: dify
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT

Apply it:

kubectl apply -f gce-configs.yaml -n dify

2. Annotate the Service

kubectl annotate service dify \
  cloud.google.com/backend-config='{"default":"dify-backend-config"}' \
  --overwrite -n dify

3. Create the Ingress

Save as ingress.yaml (replace <YOUR-URL>):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dify-ingress
  namespace: dify
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "dify-frontend-config" # Attaches the HTTPS redirect from gce-configs.yaml
spec:
  rules:
  - host: <YOUR-URL>
    http:
      paths:
      - backend:
          service:
            name: dify
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
#  tls: # If you use https
#  - secretName: <your-tls-secret>

Apply:

kubectl apply -f ingress.yaml -n dify
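If you’d rather not manage your own TLS secret, GKE’s ManagedCertificate is one option. A sketch (dify-managed-cert is an illustrative name, and provisioning only succeeds once your DNS A record points at the ingress IP):

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: dify-managed-cert
  namespace: dify
spec:
  domains:
    - <YOUR-URL>
```

Save it to its own file, apply it in the dify namespace, and annotate the ingress with networking.gke.io/managed-certificates: dify-managed-cert.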

4. Wait for IP and Configure DNS

kubectl get ingress -n dify

After a few minutes, GCP will assign a public IP to the ingress. Point your DNS A record to this IP.

And we’re done!

That’s it! You should now be able to access, configure, and start using Dify on your GKE cluster.
If this helped you in any way, or if you spotted any issues, let me know!