🎬DEMO — Simulated environment. No real OCI resources are created or modified.
Deploy OKE clusters in minutes, not hours
Infragate gives your team self-service Oracle Kubernetes Engine provisioning — configure, deploy, scale and destroy OCI clusters through a clean portal, no CLI or Terraform knowledge required.
OKE
Cluster Type
Modular
Architecture
Multi-pool
Node Pool Support
Configurable
Node Count
~5 min
Avg. Deploy Time
Terraform
Powered By
Deploy in minutes
Pick a name, CIDR and node configuration. Infragate runs Terraform under the hood — VCN, subnets, node pools and kubeconfig, all provisioned automatically.
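As a rough sketch of what such a Terraform run provisions — illustrative only, with simplified arguments and hypothetical variable names (`cluster_name`, `lab_cidr`, `node_pools`), not Infragate's actual module:

```hcl
# Illustrative sketch of the per-cluster resources; variable names are hypothetical.
resource "oci_core_vcn" "lab" {
  compartment_id = var.compartment_ocid
  cidr_blocks    = [var.lab_cidr]            # the /24 picked in the form
  display_name   = "${var.cluster_name}-vcn"
}

resource "oci_containerengine_cluster" "oke" {
  compartment_id     = var.compartment_ocid
  name               = "${var.cluster_name}-cluster"
  kubernetes_version = var.k8s_version
  vcn_id             = oci_core_vcn.lab.id
}

resource "oci_containerengine_node_pool" "pool" {
  for_each           = var.node_pools        # map: pool name => node count
  cluster_id         = oci_containerengine_cluster.oke.id
  compartment_id     = var.compartment_ocid
  name               = each.key
  kubernetes_version = var.k8s_version
  node_shape         = var.vm_shape          # e.g. "VM.Standard.E4.Flex"

  node_config_details {
    size = each.value
    placement_configs {
      availability_domain = var.ad
      subnet_id           = oci_core_subnet.workers.id  # subnet resource omitted for brevity
    }
  }
}
```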
Secure by default
Keycloak-backed SSO keeps every team member isolated — users see only their own clusters. Admins control limits, shapes and tier access from a central config panel.
Full lifecycle control
Scale node pools up or down, swap configurations, and destroy clusters when done. CIDR ranges are reclaimed automatically and made available for re-use.
🔑 Admin deploy mode — resource limit bypassed.
New Cluster
Self-service OKE provisioning · OCI
🔑 Admin action — deploying as admin bypasses user cluster limits.
🔒
Cluster limit reached
You have reached the maximum number of clusters allowed for your account. Destroy an existing cluster to free up a slot, or contact support to request a higher limit.
Standard
Advanced ✓
Have an existing VCN, compartment or subnet? Configure them in the Advanced tab.
What does this mean?
Identity
Used to name the cluster and OCI resources · your identity is managed via Keycloak
Network
Each lab gets an isolated /24 subnet · released on destroy
Child compartment auto-named from Cluster Name if left blank
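The per-lab /24 carving described above can be sketched with Terraform's `cidrsubnet()` function — a hedged illustration, where the base pool range and `lab_index` variable are assumptions, not Infragate's actual allocator:

```hcl
# Illustrative: carve isolated /24 lab subnets out of a shared pool range.
locals {
  cidr_pool = "10.120.0.0/16"
  # cidrsubnet(base, newbits, netnum): a /16 plus 8 new bits yields a /24
  lab_cidr  = cidrsubnet(local.cidr_pool, 8, var.lab_index)
  # e.g. lab_index = 10 gives "10.120.10.0/24"
}
```

On destroy, the index can simply be returned to a free pool, which is what makes automatic CIDR reclamation straightforward.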
Kubernetes
Compute
Available shapes loaded from OCI tenancy configuration
Node Pools
pool-1
pool-2
ℹ️ Max 3 node pools · max 3 nodes per pool · max 1 OCPU · 12 GB RAM · 50 GB storage per node.
Need more? Contact Infragate Support
ℹ️ Enterprise overrides. Leave fields blank to let Infragate create resources automatically. Provide OCIDs to reuse existing OCI infrastructure — Terraform will skip creation and attach to your existing resources instead.
Virtual Cloud Network
Leave blank — a new VCN will be created automatically. Provide an OCID to attach to an existing VCN.
Compartment
Leave blank — a child compartment is created from Cluster Name. Provide an OCID to deploy into an existing compartment.
Subnet
Leave blank — a private subnet is created within the VCN. Provide an OCID to place worker nodes in an existing subnet.
⚠️ When using an existing subnet, ensure the CIDR range selected in the Standard tab does not overlap with existing allocations in your VCN.
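The create-or-attach behaviour described above maps onto a common Terraform pattern — sketched here with hypothetical variable names, not the actual module:

```hcl
# Illustrative create-or-attach pattern; variable names are hypothetical.
variable "existing_vcn_ocid" {
  type    = string
  default = ""   # blank => create a new VCN automatically
}

# Create the VCN only when no OCID was supplied.
resource "oci_core_vcn" "lab" {
  count          = var.existing_vcn_ocid == "" ? 1 : 0
  compartment_id = var.compartment_ocid
  cidr_blocks    = [var.lab_cidr]
}

locals {
  # Downstream resources reference a single VCN ID either way.
  vcn_id = var.existing_vcn_ocid != "" ? var.existing_vcn_ocid : oci_core_vcn.lab[0].id
}
```

The same conditional-count idiom applies to the compartment and subnet overrides.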
Running terraform…
🗺 Architecture Preview
OCI Region
TENANCY / —
VCN · —
Private Subnet · —
☸
<name>-cluster
OKE · v1.32
OCI-Managed Control Plane
HA
📋 Deployment Summary
Cluster Name
not set
CIDR Range
not selected
K8s Version
v1.32
Node Pools
2
VM Shape
VM.Standard.E4.Flex
Total Nodes
—
Terraform · Compartment auto-created · State in OCI Object Storage
Private key for direct SSH access to cluster nodes.
⚠ Save this now. For security, download the key and store it safely — it may not be available after your session ends.
Running terraform plan -destroy…
Calculating resources to be removed. This may take a few seconds.
⚠ Destroy plan —
The following resources will be permanently deleted by terraform destroy. This cannot be undone.
Destroying cluster…
Running terraform destroy…
Running terraform init & plan…
Preparing your deployment plan. This may take a few seconds.
Running terraform plan…
Refreshing state and computing diff. This may take a few seconds.
Review deployment plan
Review what will be created via terraform apply before confirming.
Scale cluster —
Adjust resources and node counts. Click Preview plan to review changes before applying.
ℹ️ Basic cluster — scaling updates the desired node count in OKE, but existing nodes are not automatically replaced. To apply changes to running nodes you must manually cycle them via the OCI Console: OKE → Cluster → Node Pool → Nodes → Cordon & drain → Terminate. New nodes will be provisioned automatically with the updated config.
⚠️ Basic cluster — API scaling adjusts desired node count only. Existing nodes require manual cycling. Upgrade to Enhanced for full in-place scaling.
These changes will be applied via terraform apply. Review carefully before confirming.
Applying changes…
Running terraform apply…
Need more resources?
Your request exceeds the self-service limits for your account:
Current limits per user: 1 cluster
Current limits per cluster: 3 node pools · 3 nodes per pool
Current limits per node: 1 OCPU · 12 GB RAM · 50 GB storage
To request higher limits, contact the Infragate Support team and include your use case, estimated resource requirements and your Keycloak user ID.
All Clusters
Full visibility across all user clusters · Demo data
3
Total clusters
1
Online
1
Provisioning
3/24
CIDRs used
Clusters
Status
Cluster name
User
CIDR
K8s
Tier
Resources
Created
Actions
Online
oke-demo-1-cluster
demo-1
10.120.10.0/24
v1.32
Basic
2 pools · 5 nodes
2h ago
Error
oke-demo-2-cluster
demo-2
10.120.20.0/24
v1.31
Enhanced
1 pool · 1 node
5h ago
Provisioning
oke-demo-3-cluster
demo-3
10.120.30.0/24
v1.32
Basic
3 pools · 6 nodes
4m ago
User Management
Manage user access, cluster limits and account status
Users
Username
Email
Status
Clusters
Cluster limit
Resource overrides
Joined
demo-user-1
user1@infragate.cloud
Active
1
1
Global defaults
2024-03-01
demo-user-2
user2@infragate.cloud
Active
1
3
pools: 5 · nodes: 10 · RAM: 64 GB
2024-03-02
demo-user-3
user3@infragate.cloud
Suspended
1
2
Global defaults
2024-03-05
User overrides — —
Set limits for this user. Leave a field blank to inherit the global default. Active overrides take precedence over platform-wide settings.
Available VM Shapes
— shown in cluster provisioning form
Define which OCI VM shapes are available to users when provisioning clusters.
VM.Standard.E4.Flex
AMD · Flex OCPU/RAM
Enabled
VM.Standard.A1.Flex
ARM Ampere · cost-efficient
Enabled
VM.Standard3.Flex
Intel · Flex OCPU/RAM
Disabled
Keycloak / Authentication
Cluster Tier
— applies to all new clusters
Choose the OKE cluster tier for all newly provisioned clusters. Existing clusters are not affected.
Basic is free. Enhanced (~$0.10/hr per cluster) enables full in-place node scaling and cluster autoscaler support.
Basic — OKE managed control plane, free tier. Node scaling via API updates desired count; existing nodes require manual cycling via OCI Console.
Enhanced — full in-place node scaling, cluster autoscaler support, SLA-backed control plane. Billed at ~$0.10/hr per cluster.
Global Resource Limits
Available K8s Versions
v1.32 · v1.31 · v1.30
Audit Log
Full history of all Terraform operations across all users
Operations
Timestamp
User
Operation
Cluster
Status
Duration
Logs
2026-03-08 14:32:01
demo-1
deploy
oke-demo-1-cluster
success
4m 12s
2026-03-08 11:15:44
demo-2
deploy
oke-demo-2-cluster
success
3m 58s
2026-03-08 15:28:10
demo-3
deploy
oke-demo-3-cluster
running
—
2026-03-07 09:04:22
demo-2
scale
oke-demo-2-cluster
success
1m 03s
2026-03-06 17:51:39
demo-1
destroy
oke-demo-old
failed
2m 05s
Job logs
—
Select a job entry to view its logs.