🎬 DEMO BUILD — All data is dummy/simulated. Use this for presentations and testing the UI flow.

Deploy OKE clusters
in minutes, not hours

Self-service Oracle Kubernetes Engine provisioning for labs and enterprise clusters. Configure, deploy and tear down your infrastructure with a few clicks.

OKE
Cluster type
Multi-pool
Architecture
Configurable
Node count
Any Shape
OCI Compute
One-click provisioning
Fill in your details, pick a CIDR range and node config. Terraform handles everything — including compartment creation — automatically.
Keycloak authentication
Secure login via your organisation's Keycloak instance. Each user sees only their own clusters and resources.
Clean teardown
Destroy your cluster when done. All OCI resources, compartments and CIDR allocations are released automatically.
New Cluster
Self-service OKE provisioning · OCI
🔒
Cluster limit reached
You have reached the maximum number of clusters allowed for your account. Destroy an existing cluster to free up a slot, or contact support to request a higher limit.
Standard
Advanced
Have an existing VCN, compartment or subnet? Configure them in the Advanced tab.
What does this mean?
Identity
Used to name the cluster and OCI resources · your identity is managed via Keycloak
Network
Each lab gets an isolated /24 subnet · released on destroy
Child compartment auto-named from Cluster Name if left blank
Kubernetes
Compute
Available shapes loaded from OCI tenancy configuration
Node Pools
pool-1
pool-2
ℹ️ Max 3 node pools · max 3 nodes per pool · max 1 OCPU · 12 GB RAM · 50 GB storage per node. Need more? Contact Infragate Support
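The limits above can be checked client-side before submitting the form. A minimal sketch, assuming a simple list-of-dicts pool representation (the `validate_pools` helper and its field names are illustrative, not the actual Infragate implementation):

```python
# Per-account limits quoted in the banner above.
LIMITS = {"pools": 3, "nodes_per_pool": 3, "ocpus": 1, "ram_gb": 12, "storage_gb": 50}

def validate_pools(pools: list[dict]) -> list[str]:
    """Return human-readable limit violations; an empty list means the config is valid."""
    errors = []
    if len(pools) > LIMITS["pools"]:
        errors.append(f"too many node pools: {len(pools)} > {LIMITS['pools']}")
    for p in pools:
        if p["nodes"] > LIMITS["nodes_per_pool"]:
            errors.append(f"{p['name']}: {p['nodes']} nodes > {LIMITS['nodes_per_pool']}")
        if p["ocpus"] > LIMITS["ocpus"]:
            errors.append(f"{p['name']}: {p['ocpus']} OCPUs > {LIMITS['ocpus']}")
        if p["ram_gb"] > LIMITS["ram_gb"]:
            errors.append(f"{p['name']}: {p['ram_gb']} GB RAM > {LIMITS['ram_gb']}")
        if p["storage_gb"] > LIMITS["storage_gb"]:
            errors.append(f"{p['name']}: {p['storage_gb']} GB storage > {LIMITS['storage_gb']}")
    return errors
```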
ℹ️ Enterprise overrides. Leave fields blank to let Infragate create resources automatically. Provide OCIDs to reuse existing OCI infrastructure — Terraform will skip creation and attach to your existing resources instead.
Virtual Cloud Network
Leave blank — a new VCN will be created automatically. Provide an OCID to attach to an existing VCN.
Compartment
Leave blank — a child compartment is created from Cluster Name. Provide an OCID to deploy into an existing compartment.
Subnet
Leave blank — a private subnet is created within the VCN. Provide an OCID to place worker nodes in an existing subnet.
⚠️ When using an existing subnet, ensure the CIDR range selected in the Standard tab does not overlap with existing allocations in your VCN.
Running terraform…
🗺  Architecture Preview
OCI Region
TENANCY / —
VCN · —
Private Subnet · —
<name>-cluster
OKE · v1.32
OCI-Managed Control Plane
HA
📋  Deployment Summary
Cluster Name
not set
CIDR Range
not selected
K8s Version
1.32
Node Pools
2
VM Shape
VM.Standard.E4.Flex
Total Nodes
Terraform
Compartment auto-created · State in OCI Object Storage
My Clusters
Manage your OKE environments
3
Total
2
Online
1
Provisioning
0
Failed
oke-demo-1-cluster
Online
v1.32
compartment: lab-demo-1 · created 2 hours ago
☸  Cluster Info
Cluster name: oke-demo-1-cluster
OCID: ocid1.cluster.oc1…aaaa1234 📋
Compartment: lab-demo-1
K8s version: v1.32
VM shape: VM.Standard.E4.Flex
Created: 2025-06-10 14:32 UTC
🌐  Network Info
VCN CIDR: 10.0.0.0/16
Subnet CIDR: 10.120.10.0/24
API Endpoint: 10.120.10.1:6443 📋
LB IP: 158.101.42.77 📋
Region: — cluster.region —
AD spread: AD-1, AD-2
⬡  Node Pools
demo-1-pool-1 AD-1
Nodes: 3 / 3 running
OCPU / node: 1
RAM / node: 12 GB
Storage / node: 50 GB
demo-1-pool-2 AD-2
Nodes: 2 / 2 running
OCPU / node: 1
RAM / node: 8 GB
Storage / node: 30 GB
🔐  Access
Kubeconfig
Use this to connect with kubectl.
apiVersion: v1
clusters:
- cluster:
    server: https://10.120.10.1:6443
    certificate-authority-data: LS0tLS1CRUdJTi…
  name: oke-demo-1-cluster
contexts:
- context:
    cluster: oke-demo-1-cluster
    user: user-oke-demo-1-cluster
  name: oke-demo-1-cluster
current-context: oke-demo-1-cluster
kind: Config
users:
- name: user-oke-demo-1-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args: [ce, cluster, generate-token, --cluster-id, ocid1.cluster…]
SSH Key
Private key for direct SSH access to cluster nodes.
Save this key now: download it and store it securely, as it may not be available after your session ends.
All Clusters
Full visibility across all user clusters · Demo data
3
Total clusters
2
Online
1
Provisioning
14/24
CIDRs used
Clusters
Status · Cluster name · User · CIDR · K8s · Tier · Resources · Created
Online oke-demo-1-cluster demo-1 10.120.10.0/24 v1.32 Basic 2 pools · 5 nodes 2h ago
Error oke-demo-2-cluster demo-2 10.120.20.0/24 v1.31 Enhanced 1 pool · 1 node 5h ago
Provisioning oke-demo-3-cluster demo-3 10.120.30.0/24 v1.32 Basic 3 pools · 6 nodes 4m ago
User Management
Manage user access, cluster limits and account status
Users
Username · Email · Status · Clusters · Cluster limit · Resource overrides · Tier · Joined
demo-user-1 · [email protected] · Active · 1 · 1 · Global defaults · Basic · 2024-03-01
demo-user-2 · [email protected] · Active · 1 · 3 · pools: 5, nodes: 10, RAM: 64 GB · Enhanced · 2024-03-02
demo-user-3 · [email protected] · Suspended · 1 · 2 · Global defaults · Basic · 2024-03-05
Configuration
OCI settings, CIDR pool and resource limits · Demo data
CIDR Pool 3 of 5 allocated
Available /24 subnets to allocate to new clusters. Blue = allocated, white = available.
10.120.10.0/24 × 10.120.20.0/24 × 10.120.30.0/24 × 10.120.40.0/24 × 10.120.50.0/24 ×
OCI Settings
Available VM Shapes — shown in cluster provisioning form
Define which OCI VM shapes are available to users when provisioning clusters.
VM.Standard.E4.Flex
AMD · Flex OCPU/RAM
Enabled
VM.Standard.A1.Flex
ARM Ampere · cost-efficient
Enabled
VM.Standard3.Flex
Intel · Flex OCPU/RAM
Disabled
Keycloak / Authentication
Cluster Tier — applies to all new clusters
Choose the OKE cluster tier for all newly provisioned clusters. Existing clusters are not affected. Basic is free. Enhanced (~$0.10/hr per cluster) enables full in-place node scaling and cluster autoscaler support.
Basic — OKE managed control plane, free tier. Node scaling via API updates desired count; existing nodes require manual cycling via OCI Console.
Global Resource Limits
Available K8s Versions
v1.32 × v1.31 × v1.30 ×
Audit Log
Full history of all Terraform operations across all users
Operations
Timestamp · User · Operation · Cluster · Status · Duration · Logs
2026-03-08 14:32:01 demo-1 deploy oke-demo-1-cluster success 4m 12s
2026-03-08 11:15:44 demo-2 deploy oke-demo-2-cluster success 3m 58s
2026-03-08 15:28:10 demo-3 deploy oke-demo-3-cluster running
2026-03-07 09:04:22 demo-2 scale oke-demo-2-cluster success 1m 03s
2026-03-06 17:51:39 demo-1 destroy oke-demo-old failed 2m 05s