prod{access,vider}: implement

Prodaccess/Prodvider allow issuing short-lived certificates for all SSO
users to access the Kubernetes cluster.

Currently, all users get a personal-$username namespace in which they
have administrative rights. Beyond that, they get no access.

In addition, we define a static ClusterRoleBinding that grants some
admins access to everything. In the future, this will be more granular.

We also update relevant documentation.
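
Example flow after this change (assuming env.sh has been sourced and
your SSO account is in the staff or kubernetes-users group):

    prodaccess
    kubectl -n personal-$(whoami) get pods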

Change-Id: Ia18594eea8a9e5efbb3e9a25a04a28bbd6a42153
diff --git a/README b/README
index 902b12c..292be56 100644
--- a/README
+++ b/README
@@ -13,24 +13,9 @@
     tools/install.sh # build tools
 
 
-Then, to get Kubernets:
+Then, to get Kubernetes access to k0.hswaw.net (current nearly-production cluster):
 
-    echo "185.236.240.36 k0.hswaw.net" >> /etc/hosts # temporary hack until we get loadbalancers working
-    bazel run //cluster/clustercfg:clustercfg admincreds $(whoami)-admin # get administrative creds (valid for 5 days)
+    prodaccess
     kubectl version
 
-Clusters
-========
-
-The following kubernetes clusters are available:
-
-k0.hswaw.net
-------------
-
-3 nodes (bc01n{01,02,03}.hswaw.net), mixed worker/master.
-
-No persistent storage (yet).
-
-Temporary development cluster. Will become base production cluster once configuration is done, but will *likely be fully cleared*.
-
-Feel free to use for tests, but your pods might disappear at any time.
+You will automatically get a `personal-$USERNAME` namespace in which you have full admin rights.
diff --git a/WORKSPACE b/WORKSPACE
index bc9c217..47e7d02 100644
--- a/WORKSPACE
+++ b/WORKSPACE
@@ -690,3 +690,15 @@
     commit = "68ac5879751a7105834296859f8c1bf70b064675",
     importpath = "github.com/sethvargo/go-password",
 )
+
+go_repository(
+    name = "in_gopkg_ldap_v3",
+    commit = "9f0d712775a0973b7824a1585a86a4ea1d5263d9",
+    importpath = "gopkg.in/ldap.v3",
+)
+
+go_repository(
+    name = "in_gopkg_asn1_ber_v1",
+    commit = "f715ec2f112d1e4195b827ad68cf44017a3ef2b1",
+    importpath = "gopkg.in/asn1-ber.v1",
+)
diff --git a/cluster/README b/cluster/README
index e798e96..ae09fc7 100644
--- a/cluster/README
+++ b/cluster/README
@@ -6,33 +6,17 @@
 Accessing via kubectl
 ---------------------
 
-There isn't yet a service for getting short-term user certificates. Instead, you'll have to get admin certificates:
-
-    bazel run //cluster/clustercfg:clustercfg admincreds $(whoami)-admin
+    prodaccess # get a short-lived certificate for your use via SSO
     kubectl get nodes
 
-Provisioning nodes
+Persistent Storage
 ------------------
 
- - bring up a new node with nixos, running the configuration.nix from bootstrap (to be documented)
- - `bazel run //cluster/clustercfg:clustercfg nodestrap bc01nXX.hswaw.net`
-
-That's it!
-
-Ceph
-====
-
-We run Ceph via Rook. The Rook operator is running in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.
-
-The following Ceph clusters are available:
-
-ceph-waw1
----------
-
 HDDs on bc01n0{1-3}. 3TB total capacity.
 
 The following storage classes use this cluster:
 
+ - `waw-hdd-paranoid-1` - 3 replicas
  - `waw-hdd-redundant-1` - erasure coded 2.1
  - `waw-hdd-yolo-1` - unreplicated (you _will_ lose your data)
  - `waw-hdd-redundant-1-object` - erasure coded 2.1 object store
@@ -49,3 +33,22 @@
 
 `tools/rook-s3cmd-config` can be used to generate test configuration file for s3cmd.
 Remember to append `:default-placement` to your region name (ie. `waw-hdd-redundant-1-object:default-placement`)
+
+Administration
+==============
+
+Provisioning nodes
+------------------
+
+ - bring up a new node with nixos, running the configuration.nix from bootstrap (to be documented)
+ - `bazel run //cluster/clustercfg:clustercfg nodestrap bc01nXX.hswaw.net`
+
+That's it!
+
+Ceph
+====
+
+We run Ceph via Rook. The Rook operator is running in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.
+
+The following Ceph clusters are available:
+
diff --git a/cluster/certs/BUILD.bazel b/cluster/certs/BUILD.bazel
new file mode 100644
index 0000000..ca15f0f
--- /dev/null
+++ b/cluster/certs/BUILD.bazel
@@ -0,0 +1,18 @@
+load("@io_bazel_rules_go//go:def.bzl", "go_library")
+load("@io_bazel_rules_go//extras:embed_data.bzl", "go_embed_data")
+
+go_embed_data(
+    name = "certs_data",
+    srcs = glob(["*.crt"]),
+    package = "certs",
+    flatten = True,
+)
+
+go_library(
+    name = "go_default_library",
+    srcs = [
+        ":certs_data",  # keep
+    ],
+    importpath = "code.hackerspace.pl/cluster/certs",
+    visibility = ["//visibility:public"],
+)
diff --git a/cluster/certs/ca-kube-prodvider.cert b/cluster/certs/ca-kube-prodvider.cert
new file mode 100644
index 0000000..e5ec6d9
--- /dev/null
+++ b/cluster/certs/ca-kube-prodvider.cert
@@ -0,0 +1,31 @@
+-----BEGIN CERTIFICATE-----
+MIIFQzCCBCugAwIBAgIUbcxmU7cMccTf/ERKgi0uDIKJRoEwDQYJKoZIhvcNAQEL
+BQAwgYMxCzAJBgNVBAYTAlBMMRQwEgYDVQQIEwtNYXpvd2llY2tpZTEPMA0GA1UE
+BxMGV2Fyc2F3MRswGQYDVQQKExJXYXJzYXcgSGFja2Vyc3BhY2UxEzARBgNVBAsT
+CmNsdXN0ZXJjZmcxGzAZBgNVBAMTEmt1YmVybmV0ZXMgbWFpbiBDQTAeFw0xOTA4
+MzAyMDI1MDBaFw0yMDA4MjkyMDI1MDBaMIGsMQswCQYDVQQGEwJQTDEUMBIGA1UE
+CBMLTWF6b3dpZWNraWUxDzANBgNVBAcTBldhcnNhdzEbMBkGA1UEChMSV2Fyc2F3
+IEhhY2tlcnNwYWNlMSowKAYDVQQLEyFrdWJlcm5ldGVzIHByb2R2aWRlciBpbnRl
+cm1lZGlhdGUxLTArBgNVBAMTJGt1YmVybmV0ZXMgcHJvZHZpZGVyIGludGVybWVk
+aWF0ZSBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAL/38OKQgrqI
+9WZKRubACVF1QUmZS9IIzcmmxsAJEvNwCirAr6Rx45G+uBlUx0PmHK+783Pa0WEO
+deTHpZZt5o6YrQGvEzkI9ckDraUjRcQEQewi3kygmAdPW6GMWZd7fjCjsEQ0Engc
+qJ7BkEWNfJYLh8VpEwPz1ClqFrlbHU55hbuvNNg3Ro0enFmTu3PPZYUIcdX3jyJz
+p/fsE7K/f2OhHG2ej0Ji2Ssz6Bo9bB6yHLMN1oYzGB5H8Xa5dQ6LqpU0wUBqtGC8
+06ZUfNA1gtpTOj+ApDX/OYucoOE422r1lT6SfgeBhHGN3xalcYyiPumFsCBUSq+B
+7oLRW3emWJcjlOdmhtx26yl5/XpONY8u/jPG56CnT3tNGPdYnpVQ/969NrKA7yd4
+TRA4rU6Nyg5f3x8Xrw5QPci5Uuz2X2feFy53x25i2tRT2fm5VabzdjsO9mXCZbl8
+BO8mLVJ4Ojw5ER/sIw/OME29+tcBL3j31OoBUAHo82ca4B0KJBCWDHrjDTlchFfT
+fQfFWuRluZaa1kGU/9hEuHe8wXNsMlkCW+68xZ5SXLX29ruhx7SoDk3+SMk1GMNv
+vZr6CjWer94OajPN+scW7Pol2mhqENWFsTDA0WFN0HwLjLna9vQJg6vZeobm3bWZ
+DWl93HqdKeINlp9Q0HQ7nR+LUkeodWf7AgMBAAGjgYMwgYAwDgYDVR0PAQH/BAQD
+AgGmMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTAD
+AQH/MB0GA1UdDgQWBBRpjeqS08ZAgwwhQZnMEmrNN2PdszAfBgNVHSMEGDAWgBSY
+Ml0OTzMe+wnpiSQTFkJqgNGZ0DANBgkqhkiG9w0BAQsFAAOCAQEAiVxVjz4vuN0w
+9mw56taa8AxOF4Cl18LEuxVnw6ugxG5ahlhZOssnv/HdDwoHdlbLw5ER2RTK0hFT
+whH76BkJOUwAZ+YggpnOFf5hUIf9e3Pfu5MtdSBJQ0LHPRY3QPP/gHEsQR0muXVd
+AIyTQZPuJ2M98bWgaZX4yrJ31jLjcNPFM7RXiIi1ZgTr7LTRCALoFm1Tw/kM5TE7
+2qYjcaeJO1X3Zon5UXJogYa/3JreKQlBhGZgHHNAQobmVNmJTEvOuPw/31ZWDKVR
+Qrv04QYFUwCNGdI1Bin1rk9lbsrTiEP2x8W5cwGPaa1MR45xTrrEYBrplUJXiCBQ
+kwCwP+xLBQ==
+-----END CERTIFICATE-----
diff --git a/cluster/clustercfg/ca.py b/cluster/clustercfg/ca.py
index 4d359c7..9ed2053 100644
--- a/cluster/clustercfg/ca.py
+++ b/cluster/clustercfg/ca.py
@@ -4,6 +4,7 @@
 import os
 from six import StringIO
 import subprocess
+import tempfile
 
 
 logger = logging.getLogger(__name__)
@@ -32,6 +33,20 @@
       "expiry": "168h"
     },
     "profiles": {
+      "intermediate": {
+        "expiry": "8760h",
+        "usages": [
+          "signing",
+          "key encipherment",
+          "cert sign",
+          "crl sign",
+          "server auth",
+          "client auth",
+        ],
+        "ca_constraint": {
+          "is_ca": True,
+        },
+      },
       "server": {
         "expiry": "8760h",
         "usages": [
@@ -156,12 +171,17 @@
 
         return key, csr
 
-    def sign(self, csr, save=None):
+    def sign(self, csr, save=None, profile='client-server'):
         logging.info("{}: Signing CSR".format(self))
         ca = self._cert
         cakey = self.ss.plaintext(self._secret_key)
+
+        config = tempfile.NamedTemporaryFile(mode='w')
+        json.dump(_ca_config, config)
+        config.flush()
+
         out = self._cfssl_call(['sign', '-ca=' + ca, '-ca-key=' + cakey,
-                                '-profile=client-server', '-'], stdin=csr)
+                                '-profile='+profile, '-config='+config.name, '-'], stdin=csr)
         cert = out['cert']
         if save is not None:
             name = os.path.join(self.cdir, save)
@@ -170,6 +190,7 @@
             f.write(cert)
             f.close()
 
+        config.close()
         return cert
 
     def upload(self, c, remote_cert):
@@ -181,7 +202,7 @@
 
 
 class ManagedCertificate(object):
-    def __init__(self, ca, name, hosts, o=None, ou=None):
+    def __init__(self, ca, name, hosts, o=None, ou=None, profile='client-server'):
         self.ca = ca
 
         self.hosts = hosts
@@ -190,6 +211,7 @@
         self.cert = '{}.cert'.format(name)
         self.o = o
         self.ou = ou
+        self.profile = profile
 
         self.ensure()
 
@@ -230,7 +252,7 @@
 
         logger.info("{}: Generating...".format(self))
         key, csr = self.ca.gen_key(self.hosts, o=self.o, ou=self.ou, save=self.key)
-        self.ca.sign(csr, save=self.cert)
+        self.ca.sign(csr, save=self.cert, profile=self.profile)
 
     def upload(self, c, remote_cert, remote_key, concat_ca=False):
         logger.info("Uploading Cert {} to {} & {}".format(self, remote_cert, remote_key))
diff --git a/cluster/clustercfg/clustercfg.py b/cluster/clustercfg/clustercfg.py
index 24fa745..b6d790b 100644
--- a/cluster/clustercfg/clustercfg.py
+++ b/cluster/clustercfg/clustercfg.py
@@ -57,7 +57,7 @@
 def configure_k8s(username, ca, cert, key):
     subprocess.check_call([
         'kubectl', 'config',
-        'set-cluster', cluster,
+        'set-cluster', 'admin.' + cluster,
         '--certificate-authority=' + ca,
         '--embed-certs=true',
         '--server=https://' + cluster + ':4001',
@@ -71,13 +71,13 @@
     ])
     subprocess.check_call([
         'kubectl', 'config',
-        'set-context', cluster,
-        '--cluster=' + cluster,
+        'set-context', 'admin.' + cluster,
+        '--cluster=' + 'admin.' + cluster,
         '--user=' + username,
     ])
     subprocess.check_call([
         'kubectl', 'config',
-        'use-context', cluster,
+        'use-context', 'admin.' + cluster,
     ])
 
 
@@ -86,6 +86,18 @@
         sys.stderr.write("Usage: admincreds q3k\n")
         return 1
     username = args[0]
+    print("")
+    print("WARNING WARNING WARNING WARNING WARNING WARNING")
+    print("===============================================")
+    print("")
+    print("You are requesting ADMIN credentials.")
+    print("")
+    print("You likely shouldn't be doing this, and")
+    print("instead should be using `prodaccess`.")
+    print("")
+    print("===============================================")
+    print("WARNING WARNING WARNING WARNING WARNING WARNING")
+    print("")
 
     ## Make kube certificates.
     certs_root = os.path.join(local_root, 'cluster/certs')
@@ -169,6 +181,10 @@
         ## Make kube certificates.
         ca_kube = ca.CA(ss, certs_root, 'kube', 'kubernetes main CA')
 
+        # Make prodvider intermediate CA.
+        c = ca_kube.make_cert('ca-kube-prodvider', o='Warsaw Hackerspace', ou='kubernetes prodvider intermediate', hosts=['kubernetes prodvider intermediate CA'], profile='intermediate')
+        c.ensure()
+
         # Make kubelet certificate (per node).
         c = ca_kube.make_cert('kube-kubelet-'+fqdn, o='system:nodes', ou='Kubelet', hosts=['system:node:'+fqdn, fqdn])
         c.upload_pki(r, pki_config('kube.kubelet'))
diff --git a/cluster/kube/cluster.jsonnet b/cluster/kube/cluster.jsonnet
index dab37a8..1226354 100644
--- a/cluster/kube/cluster.jsonnet
+++ b/cluster/kube/cluster.jsonnet
@@ -1,6 +1,7 @@
 # Top level cluster configuration.
 
 local kube = import "../../kube/kube.libsonnet";
+local policies = import "../../kube/policies.libsonnet";
 
 local calico = import "lib/calico.libsonnet";
 local certmanager = import "lib/cert-manager.libsonnet";
@@ -9,6 +10,7 @@
 local metallb = import "lib/metallb.libsonnet";
 local metrics = import "lib/metrics.libsonnet";
 local nginx = import "lib/nginx.libsonnet";
+local prodvider = import "lib/prodvider.libsonnet";
 local registry = import "lib/registry.libsonnet";
 local rook = import "lib/rook.libsonnet";
 
@@ -30,7 +32,7 @@
                 "rbac.authorization.kubernetes.io/autoupdate": "true",
             },
             labels+: {
-                "kubernets.io/bootstrapping": "rbac-defaults",
+                "kubernetes.io/bootstrapping": "rbac-defaults",
             },
         },
         rules: [
@@ -57,6 +59,96 @@
         ],
     },
 
+    // This ClusterRole is bound to all humans that log in via prodaccess/prodvider/SSO.
+    // It should allow viewing of non-sensitive data for debuggability and openness.
+    crViewer: kube.ClusterRole("system:viewer") {
+        rules: [
+            {
+                apiGroups: [""],
+                resources: [
+                    "nodes",
+                    "namespaces",
+                    "pods",
+                    "configmaps",
+                    "services",
+                ],
+                verbs: ["list"],
+            },
+            {
+                apiGroups: ["metrics.k8s.io"],
+                resources: [
+                    "nodes",
+                    "pods",
+                ],
+                verbs: ["list"],
+            },
+            {
+                apiGroups: ["apps"],
+                resources: [
+                    "statefulsets",
+                ],
+                verbs: ["list"],
+            },
+            {
+                apiGroups: ["extensions"],
+                resources: [
+                    "deployments",
+                    "ingresses",
+                ],
+                verbs: ["list"],
+            }
+        ],
+    },
+    // This ClusterRole is applied (scoped to personal namespace) to all humans.
+    crFullInNamespace: kube.ClusterRole("system:admin-namespace") {
+        rules: [
+            {
+                apiGroups: ["*"],
+                resources: ["*"],
+                verbs: ["*"],
+            },
+        ],
+    },
+    // This ClusterRoleBinding allows root access to cluster admins.
+    crbAdmins: kube.ClusterRoleBinding("system:admins") {
+        roleRef: {
+            apiGroup: "rbac.authorization.k8s.io",
+            kind: "ClusterRole",
+            name: "cluster-admin",
+        },
+        subjects: [
+            {
+                apiGroup: "rbac.authorization.k8s.io",
+                kind: "User",
+                name: user + "@hackerspace.pl",
+            } for user in [
+                "q3k",
+                "implr",
+                "informatic",
+            ]
+        ],
+    },
+
+    podSecurityPolicies: policies.Cluster {},
+
+    allowInsecureNamespaces: [
+        policies.AllowNamespaceInsecure("kube-system"),
+        # TODO(q3k): fix this?
+        policies.AllowNamespaceInsecure("ceph-waw2"),
+    ],
+
+    // Allow all service accounts (thus all controllers) to create secure pods.
+    crbAllowServiceAccountsSecure: kube.ClusterRoleBinding("policy:allow-all-secure") {
+        roleRef_: cluster.podSecurityPolicies.secureRole,
+        subjects: [
+            {
+                kind: "Group",
+                apiGroup: "rbac.authorization.k8s.io",
+                name: "system:serviceaccounts",
+            }
+        ],
+    },
+
     // Calico network fabric
     calico: calico.Environment {},
     // CoreDNS for this cluster.
@@ -106,6 +198,9 @@
             objectStorageName: "waw-hdd-redundant-2-object",
         },
     },
+
+    // Prodvider
+    prodvider: prodvider.Environment {},
 };
 
 
diff --git a/cluster/kube/lib/cockroachdb.libsonnet b/cluster/kube/lib/cockroachdb.libsonnet
index ac4c965..212104d 100644
--- a/cluster/kube/lib/cockroachdb.libsonnet
+++ b/cluster/kube/lib/cockroachdb.libsonnet
@@ -36,6 +36,7 @@
 
 local kube = import "../../../kube/kube.libsonnet";
 local cm = import "cert-manager.libsonnet";
+local policies = import "../../../kube/policies.libsonnet";
 
 {
     Cluster(name): {
@@ -70,6 +71,8 @@
             [if cluster.cfg.ownNamespace then "ns"]: kube.Namespace(cluster.namespaceName),
         },
 
+        insecurePolicy: policies.AllowNamespaceInsecure(cluster.namespaceName),
+
         name(suffix):: if cluster.cfg.ownNamespace then suffix else name + "-" + suffix,
 
         pki: {
diff --git a/cluster/kube/lib/metallb.libsonnet b/cluster/kube/lib/metallb.libsonnet
index a56fc90..7f3d746 100644
--- a/cluster/kube/lib/metallb.libsonnet
+++ b/cluster/kube/lib/metallb.libsonnet
@@ -1,6 +1,7 @@
 # Deploy MetalLB
 
 local kube = import "../../../kube/kube.libsonnet";
+local policies = import "../../../kube/policies.libsonnet";
 
 local bindServiceAccountClusterRole(sa, cr) = kube.ClusterRoleBinding(cr.metadata.name) {
     roleRef: {
@@ -32,6 +33,8 @@
 
         ns: if cfg.namespaceCreate then kube.Namespace(cfg.namespace),
 
+        insecurePolicy: policies.AllowNamespaceInsecure(cfg.namespace),
+
         saController: kube.ServiceAccount("controller") {
             metadata+: {
                 namespace: cfg.namespace,
diff --git a/cluster/kube/lib/nginx.libsonnet b/cluster/kube/lib/nginx.libsonnet
index a871b96..ab7bbc2 100644
--- a/cluster/kube/lib/nginx.libsonnet
+++ b/cluster/kube/lib/nginx.libsonnet
@@ -1,6 +1,7 @@
 # Deploy a per-cluster Nginx Ingress Controller
 
 local kube = import "../../../kube/kube.libsonnet";
+local policies = import "../../../kube/policies.libsonnet";
 
 {
     Environment: {
@@ -21,6 +22,8 @@
 
         namespace: kube.Namespace(cfg.namespace),
 
+        allowInsecure: policies.AllowNamespaceInsecure(cfg.namespace),
+
         maps: {
             make(name):: kube.ConfigMap(name) {
                 metadata+: env.metadata,
diff --git a/cluster/kube/lib/prodvider.libsonnet b/cluster/kube/lib/prodvider.libsonnet
new file mode 100644
index 0000000..5b75c79
--- /dev/null
+++ b/cluster/kube/lib/prodvider.libsonnet
@@ -0,0 +1,85 @@
+# Deploy prodvider (prodaccess server) in cluster.
+
+local kube = import "../../../kube/kube.libsonnet";
+
+{
+    Environment: {
+        local env = self,
+        local cfg = env.cfg,
+
+        cfg:: {
+            namespace: "prodvider",
+            image: "registry.k0.hswaw.net/cluster/prodvider:1567199084-2e1c08fa7a41faac2ef3f79a1bb82f8841a68016",
+
+            pki: {
+                intermediate: {
+                    cert: importstr "../../certs/ca-kube-prodvider.cert",
+                    key: importstr "../../secrets/plain/ca-kube-prodvider.key",
+                },
+                kube: {
+                    cert: importstr "../../certs/ca-kube.crt",
+                },
+            }
+        },
+
+        namespace: kube.Namespace(cfg.namespace),
+
+        metadata(component):: {
+            namespace: cfg.namespace,
+            labels: {
+                "app.kubernetes.io/name": "prodvider",
+                "app.kubernetes.io/managed-by": "kubecfg",
+                "app.kubernetes.io/component": component,
+            },
+        },
+
+        secret: kube.Secret("ca") {
+            metadata+: env.metadata("prodvider"),
+            data_: {
+                "intermediate-ca.crt": cfg.pki.intermediate.cert,
+                "intermediate-ca.key": cfg.pki.intermediate.key,
+                "ca.crt": cfg.pki.kube.cert,
+            },
+        },
+
+        deployment: kube.Deployment("prodvider") {
+            metadata+: env.metadata("prodvider"),
+            spec+: {
+                replicas: 3,
+                template+: {
+                    spec+: {
+                        volumes_: {
+                            ca: kube.SecretVolume(env.secret),
+                        },
+                        containers_: {
+                            prodvider: kube.Container("prodvider") {
+                                image: cfg.image,
+                                args: [
+                                    "/cluster/prodvider/prodvider",
+                                    "-listen_address", "0.0.0.0:8080",
+                                    "-ca_key_path", "/opt/ca/intermediate-ca.key",
+                                    "-ca_certificate_path", "/opt/ca/intermediate-ca.crt",
+                                    "-kube_ca_certificate_path", "/opt/ca/ca.crt",
+                                ],
+                                volumeMounts_: {
+                                    ca: { mountPath: "/opt/ca" },
+                                }
+                            },
+                        },
+                    },
+                },
+            },
+        },
+
+        svc: kube.Service("prodvider") {
+            metadata+: env.metadata("prodvider"),
+            target_pod:: env.deployment.spec.template,
+            spec+: {
+                type: "LoadBalancer",
+                ports: [
+                    { name: "public", port: 443, targetPort: 8080, protocol: "TCP" },
+                ],
+            },
+        },
+    },
+}
diff --git a/cluster/kube/lib/registry.libsonnet b/cluster/kube/lib/registry.libsonnet
index 1ce022d..a791acf 100644
--- a/cluster/kube/lib/registry.libsonnet
+++ b/cluster/kube/lib/registry.libsonnet
@@ -152,11 +152,12 @@
                     },
                     local data = self,
                     pushers:: [
-                            { who: ["q3k", "inf"], what: "vms/*" },
-                            { who: ["q3k", "inf"], what: "app/*" },
-                            { who: ["q3k", "inf"], what: "go/svc/*" },
+                            { who: ["q3k", "informatic"], what: "vms/*" },
+                            { who: ["q3k", "informatic"], what: "app/*" },
+                            { who: ["q3k", "informatic"], what: "go/svc/*" },
                             { who: ["q3k"], what: "bgpwtf/*" },
                             { who: ["q3k"], what: "devtools/*" },
+                            { who: ["q3k", "informatic"], what: "cluster/*" },
                     ],
                     acl: [
                         {
diff --git a/cluster/nix/cluster-configuration.nix b/cluster/nix/cluster-configuration.nix
index 7357f14..fdfcbed 100644
--- a/cluster/nix/cluster-configuration.nix
+++ b/cluster/nix/cluster-configuration.nix
@@ -161,7 +161,7 @@
       serviceClusterIpRange = "10.10.12.0/24";
       runtimeConfig = "api/all,authentication.k8s.io/v1beta1";
       authorizationMode = ["Node" "RBAC"];
-      enableAdmissionPlugins = ["Initializers" "NamespaceLifecycle" "NodeRestriction" "LimitRanger" "ServiceAccount" "DefaultStorageClass" "ResourceQuota"];
+      enableAdmissionPlugins = ["Initializers" "NamespaceLifecycle" "NodeRestriction" "LimitRanger" "ServiceAccount" "DefaultStorageClass" "ResourceQuota" "PodSecurityPolicy"];
       extraOpts = ''
         --apiserver-count=3 \
         --proxy-client-cert-file=${pki.kubeFront.apiserver.cert} \
diff --git a/cluster/prodaccess/BUILD.bazel b/cluster/prodaccess/BUILD.bazel
new file mode 100644
index 0000000..5124ffc
--- /dev/null
+++ b/cluster/prodaccess/BUILD.bazel
@@ -0,0 +1,25 @@
+load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
+
+go_library(
+    name = "go_default_library",
+    srcs = [
+        "kubernetes.go",
+        "prodaccess.go",
+    ],
+    importpath = "code.hackerspace.pl/hscloud/cluster/prodaccess",
+    visibility = ["//visibility:private"],
+    deps = [
+        "//cluster/certs:go_default_library",
+        "//cluster/prodvider/proto:go_default_library",
+        "@com_github_golang_glog//:go_default_library",
+        "@org_golang_google_grpc//:go_default_library",
+        "@org_golang_google_grpc//credentials:go_default_library",
+        "@org_golang_x_crypto//ssh/terminal:go_default_library",
+    ],
+)
+
+go_binary(
+    name = "prodaccess",
+    embed = [":go_default_library"],
+    visibility = ["//visibility:public"],
+)
diff --git a/cluster/prodaccess/README.md b/cluster/prodaccess/README.md
new file mode 100644
index 0000000..63fbf41
--- /dev/null
+++ b/cluster/prodaccess/README.md
@@ -0,0 +1,50 @@
+prodvider
+=========
+
+It provides access, yo.
+
+Architecture
+------------
+
+Prodvider uses an intermediate CA (the prodvider CA, signed by the kube CA) to generate the following:
+ - a cert for prodvider to present itself over gRPC for prodaccess clients
+ - a cert for prodvider to authenticate itself to the kube apiserver
+ - client certificates for prodaccess consumers.
+
+Any time someone runs 'prodaccess', they get a certificate from the intermediate CA, and the intermediate CA is included as part of the chain that they receive. They can then use this chain to authenticate against kubernetes.
+
+Naming
+------
+
+Prodvider customers get certificates with CN=`username@hackerspace.pl` and O=`sso:username`. This means that they appear to Kubernetes as a `User` named `username@hackerspace.pl` and a `Group` named `sso:username`. In the future, more groups might be given to users; do not rely on this relationship.
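+
+For example, a hypothetical user `q3k` would get CN=`q3k@hackerspace.pl` and
+O=`sso:q3k`, so RBAC rules can target either the `User` `q3k@hackerspace.pl`
+or the `Group` `sso:q3k`.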
+
+Kubernetes Structure
+--------------------
+
+After generating a user certificate, prodvider will also call Kubernetes to set up a personal user namespace (`personal-username`), a RoleBinding to `system:admin-namespace` from their `User` in their namespace (thus giving them full rights in it), and a ClusterRoleBinding to `system:viewer` from their `User` (thus giving them read access to all resources, but not to sensitive data like Secrets).
+
+`system:admin-namespace` and `system:viewer` are defined in `//cluster/kube`.
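+
+For illustration, the personal RoleBinding that prodvider ensures for a
+hypothetical user `q3k` would look roughly like this (a sketch derived from
+`kubernetes.go` in this change, not literal apiserver output):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: sso:q3k:personal
+  namespace: personal-q3k
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: User
+  name: q3k@hackerspace.pl
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: system:admin-namespace
+```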
diff --git a/cluster/prodaccess/kubernetes.go b/cluster/prodaccess/kubernetes.go
new file mode 100644
index 0000000..7226423
--- /dev/null
+++ b/cluster/prodaccess/kubernetes.go
@@ -0,0 +1,110 @@
+package main
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+	"fmt"
+	"io/ioutil"
+	"os"
+	"os/exec"
+	"path"
+	"path/filepath"
+	"time"
+
+	"github.com/golang/glog"
+
+	pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
+)
+
+func kubernetesPaths() (string, string, string) {
+	localRoot := os.Getenv("hscloud_root")
+	if localRoot == "" {
+		glog.Exitf("Please source env.sh")
+	}
+
+	localKey := path.Join(localRoot, ".kubectl", fmt.Sprintf("%s.key", flagUsername))
+	localCert := path.Join(localRoot, ".kubectl", fmt.Sprintf("%s.crt", flagUsername))
+	localCA := path.Join(localRoot, ".kubectl", "ca.crt")
+
+	return localKey, localCert, localCA
+}
+
+func needKubernetesCreds() bool {
+	localKey, localCert, _ := kubernetesPaths()
+
+	// Check for existence of cert/key.
+	if _, err := os.Stat(localKey); os.IsNotExist(err) {
+		return true
+	}
+	if _, err := os.Stat(localCert); os.IsNotExist(err) {
+		return true
+	}
+
+	// Cert/key exist, try to load and parse.
+	creds, err := tls.LoadX509KeyPair(localCert, localKey)
+	if err != nil {
+		return true
+	}
+	if len(creds.Certificate) != 1 {
+		return true
+	}
+	cert, err := x509.ParseCertificate(creds.Certificate[0])
+	if err != nil {
+		return true
+	}
+	creds.Leaf = cert
+
+	// Check if certificate will still be valid in 2 hours.
+	target := time.Now().Add(2 * time.Hour)
+	if creds.Leaf.NotAfter.Before(target) {
+		return true
+	}
+
+	return false
+}
+
+func useKubernetesKeys(keys *pb.KubernetesKeys) {
+	localKey, localCert, localCA := kubernetesPaths()
+
+	parent := filepath.Dir(localKey)
+	if _, err := os.Stat(parent); os.IsNotExist(err) {
+		os.MkdirAll(parent, 0700)
+	}
+
+	if err := ioutil.WriteFile(localKey, keys.Key, 0600); err != nil {
+		glog.Exitf("WriteFile(%q): %v", localKey, err)
+	}
+	if err := ioutil.WriteFile(localCert, keys.Cert, 0600); err != nil {
+		glog.Exitf("WriteFile(%q): %v", localCert, err)
+	}
+	if err := ioutil.WriteFile(localCA, keys.Ca, 0600); err != nil {
+		glog.Exitf("WriteFile(%q): %v", localCA, err)
+	}
+
+	kubectl := func(args ...string) {
+		cmd := exec.Command("kubectl", args...)
+		out, err := cmd.CombinedOutput()
+		if err != nil {
+			glog.Exitf("kubectl %v: %v: %v", args, err, string(out))
+		}
+	}
+
+	kubectl("config",
+		"set-cluster", keys.Cluster,
+		"--certificate-authority="+localCA,
+		"--embed-certs=true",
+		"--server=https://"+keys.Cluster+":4001")
+
+	kubectl("config",
+		"set-credentials", flagUsername,
+		"--client-certificate="+localCert,
+		"--client-key="+localKey,
+		"--embed-certs=true")
+
+	kubectl("config",
+		"set-context", keys.Cluster,
+		"--cluster="+keys.Cluster,
+		"--user="+flagUsername)
+
+	kubectl("config", "use-context", keys.Cluster)
+}
diff --git a/cluster/prodaccess/prodaccess.go b/cluster/prodaccess/prodaccess.go
new file mode 100644
index 0000000..e0e8ec2
--- /dev/null
+++ b/cluster/prodaccess/prodaccess.go
@@ -0,0 +1,114 @@
+package main
+
+import (
+	"context"
+	"crypto/x509"
+	"flag"
+	"fmt"
+	"os"
+	"os/user"
+	"syscall"
+
+	"github.com/golang/glog"
+	"golang.org/x/crypto/ssh/terminal"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials"
+
+	"code.hackerspace.pl/cluster/certs"
+	pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
+)
+
+var (
+	flagProdvider string
+	flagUsername  string
+	flagForce     bool
+)
+
+func init() {
+	flag.Set("logtostderr", "true")
+}
+
+func main() {
+	user, err := user.Current()
+	if err == nil {
+		flagUsername = user.Username
+	}
+
+	flag.StringVar(&flagProdvider, "prodvider", "prodvider.hswaw.net:443", "Prodvider endpoint")
+	flag.StringVar(&flagUsername, "username", flagUsername, "Username to authenticate with")
+	flag.BoolVar(&flagForce, "force", false, "Force retrieving certificates even if they already exist")
+	flag.Parse()
+
+	if flagUsername == "" {
+		glog.Exitf("Username could not be detected, please provide with -username flag")
+	}
+
+	cp := x509.NewCertPool()
+	if ok := cp.AppendCertsFromPEM(certs.Data["ca-kube.crt"]); !ok {
+		glog.Exitf("Could not load k8s CA")
+	}
+
+	creds := credentials.NewClientTLSFromCert(cp, "")
+	conn, err := grpc.Dial(flagProdvider, grpc.WithTransportCredentials(creds))
+	if err != nil {
+		glog.Exitf("Could not dial prodvider: %v", err)
+	}
+
+	prodvider := pb.NewProdviderClient(conn)
+	ctx := context.Background()
+
+	if !needKubernetesCreds() && !flagForce {
+		fmt.Printf("Kubernetes credentials exist. Use `prodaccess -force` to force update.\n")
+		os.Exit(0)
+	}
+
+	attempts := 0
+	for {
+		ok := authenticate(ctx, prodvider)
+		attempts += 1
+		if !ok {
+			if attempts >= 3 {
+				os.Exit(1)
+			}
+		} else {
+			fmt.Printf("Good evening professor. I see you have driven here in your Ferrari.\n")
+			os.Exit(0)
+		}
+	}
+}
+
+func authenticate(ctx context.Context, prodvider pb.ProdviderClient) bool {
+	req := &pb.AuthenticateRequest{
+		Username: flagUsername,
+		Password: password(),
+	}
+
+	res, err := prodvider.Authenticate(ctx, req)
+	if err != nil {
+		glog.Exitf("Prodvider error: %v", err)
+	}
+
+	switch res.Result {
+	case pb.AuthenticateResponse_RESULT_AUTHENTICATED:
+		break
+	case pb.AuthenticateResponse_RESULT_INVALID_CREDENTIALS:
+		fmt.Printf("Invalid username or password.\n")
+		return false
+	default:
+		glog.Exitf("Unknown authentication result: %v", res.Result)
+	}
+
+	useKubernetesKeys(res.KubernetesKeys)
+
+	return true
+}
+
+func password() string {
+	fmt.Printf("Enter SSO/LDAP password for %s@hackerspace.pl: ", flagUsername)
+	bytePassword, err := terminal.ReadPassword(int(syscall.Stdin))
+	if err != nil {
+		return ""
+	}
+	fmt.Printf("\n")
+	return string(bytePassword)
+}
diff --git a/cluster/prodvider/BUILD.bazel b/cluster/prodvider/BUILD.bazel
new file mode 100644
index 0000000..14690b7
--- /dev/null
+++ b/cluster/prodvider/BUILD.bazel
@@ -0,0 +1,64 @@
+load("@io_bazel_rules_docker//container:container.bzl", "container_image", "container_layer", "container_push")
+load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")
+
+go_library(
+    name = "go_default_library",
+    srcs = [
+        "certs.go",
+        "kubernetes.go",
+        "main.go",
+        "service.go",
+    ],
+    importpath = "code.hackerspace.pl/hscloud/cluster/prodvider",
+    visibility = ["//visibility:private"],
+    deps = [
+        "//cluster/prodvider/proto:go_default_library",
+        "@com_github_cloudflare_cfssl//config:go_default_library",
+        "@com_github_cloudflare_cfssl//csr:go_default_library",
+        "@com_github_cloudflare_cfssl//signer:go_default_library",
+        "@com_github_cloudflare_cfssl//signer/local:go_default_library",
+        "@com_github_golang_glog//:go_default_library",
+        "@in_gopkg_ldap_v3//:go_default_library",
+        "@io_k8s_api//core/v1:go_default_library",
+        "@io_k8s_api//rbac/v1:go_default_library",
+        "@io_k8s_apimachinery//pkg/api/errors:go_default_library",
+        "@io_k8s_apimachinery//pkg/apis/meta/v1:go_default_library",
+        "@io_k8s_client_go//kubernetes:go_default_library",
+        "@io_k8s_client_go//rest:go_default_library",
+        "@org_golang_google_grpc//:go_default_library",
+        "@org_golang_google_grpc//codes:go_default_library",
+        "@org_golang_google_grpc//credentials:go_default_library",
+        "@org_golang_google_grpc//status:go_default_library",
+    ],
+)
+
+go_binary(
+    name = "prodvider",
+    embed = [":go_default_library"],
+    visibility = ["//visibility:public"],
+)
+
+container_layer(
+    name = "layer_bin",
+    files = [
+        ":prodvider",
+    ],
+    directory = "/cluster/prodvider/",
+)
+
+container_image(
+    name = "runtime",
+    base = "@prodimage-bionic//image",
+    layers = [
+        ":layer_bin",
+    ],
+)
+
+container_push(
+    name = "push",
+    image = ":runtime",
+    format = "Docker",
+    registry = "registry.k0.hswaw.net",
+    repository = "cluster/prodvider",
+    tag = "{BUILD_TIMESTAMP}-{STABLE_GIT_COMMIT}",
+)
diff --git a/cluster/prodvider/certs.go b/cluster/prodvider/certs.go
new file mode 100644
index 0000000..bed0e48
--- /dev/null
+++ b/cluster/prodvider/certs.go
@@ -0,0 +1,112 @@
+package main
+
+import (
+	"crypto/tls"
+	"fmt"
+	"time"
+
+	"github.com/cloudflare/cfssl/csr"
+	"github.com/cloudflare/cfssl/signer"
+	"github.com/golang/glog"
+	"google.golang.org/grpc"
+	"google.golang.org/grpc/credentials"
+)
+
+func (p *prodvider) selfCreds() grpc.ServerOption {
+	glog.Infof("Bootstrapping certificate for self (%q)...", flagProdviderCN)
+
+	// Create a key and CSR.
+	csrPEM, keyPEM, err := p.makeSelfCSR()
+	if err != nil {
+		glog.Exitf("Could not generate key and CSR for self: %v", err)
+	}
+
+	// Create a cert
+	certPEM, err := p.makeSelfCertificate(csrPEM)
+	if err != nil {
+		glog.Exitf("Could not sign certificate for self: %v", err)
+	}
+
+	serverCert, err := tls.X509KeyPair(certPEM, keyPEM)
+	if err != nil {
+		glog.Exitf("Could not use gRPC certificate: %v", err)
+	}
+
+	signerCert, _ := p.sign.Certificate("", "")
+	serverCert.Certificate = append(serverCert.Certificate, signerCert.Raw)
+
+	return grpc.Creds(credentials.NewTLS(&tls.Config{
+		Certificates: []tls.Certificate{serverCert},
+	}))
+}
+
+func (p *prodvider) makeSelfCSR() ([]byte, []byte, error) {
+	signerCert, _ := p.sign.Certificate("", "")
+	req := &csr.CertificateRequest{
+		CN: flagProdviderCN,
+		KeyRequest: &csr.BasicKeyRequest{
+			A: "rsa",
+			S: 4096,
+		},
+		Names: []csr.Name{
+			{
+				C:  signerCert.Subject.Country[0],
+				ST: signerCert.Subject.Province[0],
+				L:  signerCert.Subject.Locality[0],
+				O:  signerCert.Subject.Organization[0],
+				OU: signerCert.Subject.OrganizationalUnit[0],
+			},
+		},
+	}
+
+	g := &csr.Generator{
+		Validator: func(req *csr.CertificateRequest) error { return nil },
+	}
+
+	return g.ProcessRequest(req)
+}
+
+func (p *prodvider) makeSelfCertificate(csr []byte) ([]byte, error) {
+	req := signer.SignRequest{
+		Hosts:   []string{},
+		Request: string(csr),
+		Profile: "server",
+	}
+	return p.sign.Sign(req)
+}
+
+func (p *prodvider) makeKubernetesCSR(username, o string) ([]byte, []byte, error) {
+	signerCert, _ := p.sign.Certificate("", "")
+	req := &csr.CertificateRequest{
+		CN: username,
+		KeyRequest: &csr.BasicKeyRequest{
+			A: "rsa",
+			S: 4096,
+		},
+		Names: []csr.Name{
+			{
+				C:  signerCert.Subject.Country[0],
+				ST: signerCert.Subject.Province[0],
+				L:  signerCert.Subject.Locality[0],
+				O:  o,
+				OU: fmt.Sprintf("Prodvider Kubernetes Cert for %s/%s", username, o),
+			},
+		},
+	}
+
+	g := &csr.Generator{
+		Validator: func(req *csr.CertificateRequest) error { return nil },
+	}
+
+	return g.ProcessRequest(req)
+}
+
+func (p *prodvider) makeKubernetesCertificate(csr []byte, notAfter time.Time) ([]byte, error) {
+	req := signer.SignRequest{
+		Hosts:    []string{},
+		Request:  string(csr),
+		Profile:  "client",
+		NotAfter: notAfter,
+	}
+	return p.sign.Sign(req)
+}
diff --git a/cluster/prodvider/kubernetes.go b/cluster/prodvider/kubernetes.go
new file mode 100644
index 0000000..3386625
--- /dev/null
+++ b/cluster/prodvider/kubernetes.go
@@ -0,0 +1,205 @@
+package main
+
+import (
+	"encoding/pem"
+	"fmt"
+	"time"
+
+	"github.com/golang/glog"
+	corev1 "k8s.io/api/core/v1"
+	rbacv1 "k8s.io/api/rbac/v1"
+	"k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/rest"
+
+	pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
+)
+
+func (p *prodvider) kubernetesCreds(username string) (*pb.KubernetesKeys, error) {
+	o := fmt.Sprintf("sso:%s", username)
+
+	csrPEM, keyPEM, err := p.makeKubernetesCSR(username+"@hackerspace.pl", o)
+	if err != nil {
+		return nil, err
+	}
+
+	certPEM, err := p.makeKubernetesCertificate(csrPEM, time.Now().Add(13*time.Hour))
+	if err != nil {
+		return nil, err
+	}
+
+	caCert, _ := p.sign.Certificate("", "")
+	caPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caCert.Raw})
+
+	// Build certificate chain from new cert and intermediate CA.
+	chainPEM := append(certPEM, caPEM...)
+
+	glog.Infof("Generated k8s certificate for %q", username)
+	return &pb.KubernetesKeys{
+		Cluster: "k0.hswaw.net",
+		// APIServerCA
+		Ca: p.kubeCAPEM,
+		// Chain of new cert + intermediate CA
+		Cert: chainPEM,
+		Key:  keyPEM,
+	}, nil
+}
+
+func (p *prodvider) kubernetesConnect() error {
+	csrPEM, keyPEM, err := p.makeKubernetesCSR("prodvider", "system:masters")
+	if err != nil {
+		return err
+	}
+
+	certPEM, err := p.makeKubernetesCertificate(csrPEM, time.Now().Add(30*24*time.Hour))
+	if err != nil {
+		return err
+	}
+
+	caCert, _ := p.sign.Certificate("", "")
+
+	caPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caCert.Raw})
+
+	glog.Infof("Generated k8s certificate for self (system:masters)")
+
+	// Build certificate chain from our cert and intermediate CA.
+	chainPEM := append(certPEM, caPEM...)
+
+	config := &rest.Config{
+		Host: flagKubernetesHost,
+		TLSClientConfig: rest.TLSClientConfig{
+			// Chain to authenticate ourselves (us + intermediate CA).
+			CertData: chainPEM,
+			KeyData:  keyPEM,
+			// APIServer CA for verification.
+			CAData: p.kubeCAPEM,
+		},
+	}
+
+	cs, err := kubernetes.NewForConfig(config)
+	if err != nil {
+		return err
+	}
+
+	p.k8s = cs
+
+	return nil
+}
+
+// kubernetesSetupUser ensures that for a given SSO username we:
+//  - have a personal-<username> namespace
+//  - have a sso:<username>:personal rolebinding that binds
+//    system:admin-namespace to the user within their personal namespace
+//  - have a sso:<username>:global clusterrolebinding that binds
+//    system:viewer to the user at cluster level
+func (p *prodvider) kubernetesSetupUser(username string) error {
+	namespace := "personal-" + username
+	if err := p.ensureNamespace(namespace); err != nil {
+		return err
+	}
+	if err := p.ensureRoleBindingPersonal(namespace, username); err != nil {
+		return err
+	}
+	if err := p.ensureClusterRoleBindingGlobal(username); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func (p *prodvider) ensureNamespace(name string) error {
+	_, err := p.k8s.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
+	switch {
+	case err == nil:
+		// Already exists, nothing to do
+		return nil
+	case errors.IsNotFound(err):
+		break
+	default:
+		// Something went wrong.
+		return err
+	}
+	ns := &corev1.Namespace{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: name,
+		},
+	}
+	_, err = p.k8s.CoreV1().Namespaces().Create(ns)
+	return err
+}
+
+func (p *prodvider) ensureRoleBindingPersonal(namespace, username string) error {
+	name := "sso:" + username + ":personal"
+	rb := &rbacv1.RoleBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      name,
+			Namespace: namespace,
+		},
+		Subjects: []rbacv1.Subject{
+			{
+				APIGroup: "rbac.authorization.k8s.io",
+				Kind:     "User",
+				Name:     username + "@hackerspace.pl",
+			},
+		},
+		RoleRef: rbacv1.RoleRef{
+			APIGroup: "rbac.authorization.k8s.io",
+			Kind:     "ClusterRole",
+			Name:     "system:admin-namespace",
+		},
+	}
+
+	rbs := p.k8s.RbacV1().RoleBindings(namespace)
+	_, err := rbs.Get(name, metav1.GetOptions{})
+	switch {
+	case err == nil:
+		// Already exists, update.
+		_, err = rbs.Update(rb)
+		return err
+	case errors.IsNotFound(err):
+		// Create.
+		_, err = rbs.Create(rb)
+		return err
+	default:
+		// Something went wrong.
+		return err
+	}
+}
+
+func (p *prodvider) ensureClusterRoleBindingGlobal(username string) error {
+	name := "sso:" + username + ":global"
+	rb := &rbacv1.ClusterRoleBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: name,
+		},
+		Subjects: []rbacv1.Subject{
+			{
+				APIGroup: "rbac.authorization.k8s.io",
+				Kind:     "User",
+				Name:     username + "@hackerspace.pl",
+			},
+		},
+		RoleRef: rbacv1.RoleRef{
+			APIGroup: "rbac.authorization.k8s.io",
+			Kind:     "ClusterRole",
+			Name:     "system:viewer",
+		},
+	}
+
+	crbs := p.k8s.RbacV1().ClusterRoleBindings()
+	_, err := crbs.Get(name, metav1.GetOptions{})
+	switch {
+	case err == nil:
+		// Already exists, update.
+		_, err = crbs.Update(rb)
+		return err
+	case errors.IsNotFound(err):
+		// Create.
+		_, err = crbs.Create(rb)
+		return err
+	default:
+		// Something went wrong.
+		return err
+	}
+}
diff --git a/cluster/prodvider/main.go b/cluster/prodvider/main.go
new file mode 100644
index 0000000..7222a86
--- /dev/null
+++ b/cluster/prodvider/main.go
@@ -0,0 +1,149 @@
+package main
+
+import (
+	"flag"
+	"io/ioutil"
+	"math/rand"
+	"net"
+	"os"
+	"time"
+
+	"github.com/cloudflare/cfssl/config"
+	"github.com/cloudflare/cfssl/signer/local"
+	"github.com/golang/glog"
+	"google.golang.org/grpc"
+	"k8s.io/client-go/kubernetes"
+
+	pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
+)
+
+var (
+	flagLDAPServer          string
+	flagLDAPBindDN          string
+	flagLDAPGroupSearchBase string
+	flagListenAddress       string
+	flagKubernetesHost      string
+
+	flagCACertificatePath     string
+	flagCAKeyPath             string
+	flagKubeCACertificatePath string
+
+	flagProdviderCN string
+)
+
+func init() {
+	flag.Set("logtostderr", "true")
+}
+
+type prodvider struct {
+	sign      *local.Signer
+	k8s       *kubernetes.Clientset
+	srv       *grpc.Server
+	kubeCAPEM []byte
+}
+
+func newProdvider() *prodvider {
+	policy := &config.Signing{
+		Profiles: map[string]*config.SigningProfile{
+			"server": &config.SigningProfile{
+				Usage:        []string{"signing", "key encipherment", "server auth"},
+				ExpiryString: "30d",
+			},
+			"client": &config.SigningProfile{
+				Usage:        []string{"signing", "key encipherment", "client auth"},
+				ExpiryString: "30d",
+			},
+			"client-server": &config.SigningProfile{
+				Usage:        []string{"signing", "key encipherment", "server auth", "client auth"},
+				ExpiryString: "30d",
+			},
+		},
+		Default: config.DefaultConfig(),
+	}
+
+	sign, err := local.NewSignerFromFile(flagCACertificatePath, flagCAKeyPath, policy)
+	if err != nil {
+		glog.Exitf("Could not create signer: %v", err)
+	}
+
+	kubeCAPEM, err := ioutil.ReadFile(flagKubeCACertificatePath)
+	if err != nil {
+		glog.Exitf("Could not read kube CA cert path: %v")
+	}
+
+	return &prodvider{
+		sign:      sign,
+		kubeCAPEM: kubeCAPEM,
+	}
+}
+
+// timebomb restarts the prodvider after a deadline, usually 7 days +/- 4 days.
+// This is to ensure we serve with up-to-date certificates and that the service
+// can still come up after restart.
+func timebomb(srv *grpc.Server) {
+	deadline := time.Now()
+	deadline = deadline.Add(3 * 24 * time.Hour)
+	rand.Seed(time.Now().UnixNano())
+	jitter := rand.Intn(8 * 24 * 60 * 60)
+	deadline = deadline.Add(time.Duration(jitter) * time.Second)
+
+	glog.Infof("Timebomb deadline set to %v", deadline)
+
+	t := time.NewTicker(time.Minute)
+	for {
+		<-t.C
+		if time.Now().After(deadline) {
+			break
+		}
+	}
+
+	// Start killing connections, and wait one minute...
+	go srv.GracefulStop()
+	<-t.C
+	glog.Infof("Timebomb deadline exceeded, restarting.")
+	os.Exit(0)
+}
+
+func main() {
+	flag.StringVar(&flagLDAPServer, "ldap_server", "ldap.hackerspace.pl:636", "Address of LDAP server")
+	flag.StringVar(&flagLDAPBindDN, "ldap_bind_dn", "uid=%s,ou=People,dc=hackerspace,dc=pl", "LDAP Bind DN")
+	flag.StringVar(&flagLDAPGroupSearchBase, "ldap_group_search_base_dn", "ou=Group,dc=hackerspace,dc=pl", "LDAP Group Search Base DN")
+	flag.StringVar(&flagListenAddress, "listen_address", "127.0.0.1:8080", "gRPC listen address")
+	flag.StringVar(&flagKubernetesHost, "kubernetes_host", "k0.hswaw.net:4001", "Kubernetes API host")
+
+	flag.StringVar(&flagCACertificatePath, "ca_certificate_path", "", "CA certificate path (for signer)")
+	flag.StringVar(&flagCAKeyPath, "ca_key_path", "", "CA key path (for signer)")
+	flag.StringVar(&flagKubeCACertificatePath, "kube_ca_certificate_path", "", "CA certificate path (for checking kube apiserver)")
+
+	flag.StringVar(&flagProdviderCN, "prodvider_cn", "prodvider.hswaw.net", "CN of certificate that prodvider will use")
+	flag.Parse()
+
+	if flagCACertificatePath == "" || flagCAKeyPath == "" {
+		glog.Exitf("CA certificate and key must be provided")
+	}
+
+	p := newProdvider()
+	err := p.kubernetesConnect()
+	if err != nil {
+		glog.Exitf("Could not connect to kubernetes: %v", err)
+	}
+	creds := p.selfCreds()
+
+	// Start serving gRPC
+	grpcLis, err := net.Listen("tcp", flagListenAddress)
+	if err != nil {
+		glog.Exitf("Could not listen for gRPC on %q: %v", flagListenAddress, err)
+	}
+
+	glog.Infof("Starting gRPC on %q...", flagListenAddress)
+	grpcSrv := grpc.NewServer(creds)
+
+	pb.RegisterProdviderServer(grpcSrv, p)
+
+	go timebomb(grpcSrv)
+
+	err = grpcSrv.Serve(grpcLis)
+	if err != nil {
+		glog.Exitf("Could not serve gRPC: %v", err)
+	}
+}
diff --git a/cluster/prodvider/proto/BUILD.bazel b/cluster/prodvider/proto/BUILD.bazel
new file mode 100644
index 0000000..2efd457
--- /dev/null
+++ b/cluster/prodvider/proto/BUILD.bazel
@@ -0,0 +1,23 @@
+load("@io_bazel_rules_go//go:def.bzl", "go_library")
+load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")
+
+proto_library(
+    name = "proto_proto",
+    srcs = ["prodvider.proto"],
+    visibility = ["//visibility:public"],
+)
+
+go_proto_library(
+    name = "proto_go_proto",
+    compilers = ["@io_bazel_rules_go//proto:go_grpc"],
+    importpath = "code.hackerspace.pl/hscloud/cluster/prodvider/proto",
+    proto = ":proto_proto",
+    visibility = ["//visibility:public"],
+)
+
+go_library(
+    name = "go_default_library",
+    embed = [":proto_go_proto"],
+    importpath = "code.hackerspace.pl/hscloud/cluster/prodvider/proto",
+    visibility = ["//visibility:public"],
+)
diff --git a/cluster/prodvider/proto/prodvider.proto b/cluster/prodvider/proto/prodvider.proto
new file mode 100644
index 0000000..1ae2798
--- /dev/null
+++ b/cluster/prodvider/proto/prodvider.proto
@@ -0,0 +1,29 @@
+syntax = "proto3";
+package prodvider;
+option go_package = "code.hackerspace.pl/hscloud/cluster/prodvider/proto";
+
+message AuthenticateRequest {
+    string username = 1;
+    string password = 2;
+}
+
+message AuthenticateResponse {
+    enum Result {
+        RESULT_INVALID = 0;
+        RESULT_AUTHENTICATED = 1;
+        RESULT_INVALID_CREDENTIALS = 2;
+    }
+    Result result = 1;
+    KubernetesKeys kubernetes_keys = 2;
+}
+
+message KubernetesKeys {
+    string cluster = 1;
+    bytes ca = 2;
+    bytes cert = 3;
+    bytes key = 4;
+}
+
+service Prodvider {
+    rpc Authenticate(AuthenticateRequest) returns (AuthenticateResponse);
+}
diff --git a/cluster/prodvider/service.go b/cluster/prodvider/service.go
new file mode 100644
index 0000000..5635ac2
--- /dev/null
+++ b/cluster/prodvider/service.go
@@ -0,0 +1,104 @@
+package main
+
+import (
+	"context"
+	"crypto/tls"
+	"fmt"
+	"regexp"
+	"strings"
+
+	"github.com/golang/glog"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/status"
+	ldap "gopkg.in/ldap.v3"
+
+	pb "code.hackerspace.pl/hscloud/cluster/prodvider/proto"
+)
+
+var (
+	reUsername = regexp.MustCompile(`^[a-zA-Z0-9_\.]+$`)
+)
+
+func (p *prodvider) Authenticate(ctx context.Context, req *pb.AuthenticateRequest) (*pb.AuthenticateResponse, error) {
+	username := strings.TrimSpace(req.Username)
+	if username == "" || !reUsername.MatchString(username) {
+		return nil, status.Error(codes.InvalidArgument, "invalid username")
+	}
+
+	password := req.Password
+	if password == "" {
+		return &pb.AuthenticateResponse{
+			Result: pb.AuthenticateResponse_RESULT_INVALID_CREDENTIALS,
+		}, nil
+	}
+
+	tlsConfig := &tls.Config{}
+	lconn, err := ldap.DialTLS("tcp", flagLDAPServer, tlsConfig)
+	if err != nil {
+		glog.Errorf("ldap.DialTLS: %v", err)
+		return nil, status.Error(codes.Unavailable, "could not contact LDAP")
+	}
+
+	dn := fmt.Sprintf(flagLDAPBindDN, username)
+	err = lconn.Bind(dn, password)
+
+	if err != nil {
+		if ldap.IsErrorWithCode(err, ldap.LDAPResultInvalidCredentials) {
+			return &pb.AuthenticateResponse{
+				Result: pb.AuthenticateResponse_RESULT_INVALID_CREDENTIALS,
+			}, nil
+		}
+
+		glog.Errorf("ldap.Bind: %v", err)
+		return nil, status.Error(codes.Unavailable, "could not query LDAP")
+	}
+
+	groups, err := p.groupMemberships(lconn, username)
+	if err != nil {
+		return nil, err
+	}
+
+	if !groups["kubernetes-users"] && !groups["staff"] {
+		return nil, status.Error(codes.PermissionDenied, "not part of staff or kubernetes-users")
+	}
+
+	err = p.kubernetesSetupUser(username)
+	if err != nil {
+		glog.Errorf("kubernetesSetupUser(%v): %v", username, err)
+		return nil, status.Error(codes.Unavailable, "could not set up objects in Kubernetes")
+	}
+
+	keys, err := p.kubernetesCreds(username)
+	if err != nil {
+		glog.Errorf("kubernetesCreds(%q): %v", username, err)
+		return nil, status.Error(codes.Unavailable, "could not generate k8s keys")
+	}
+	return &pb.AuthenticateResponse{
+		Result:         pb.AuthenticateResponse_RESULT_AUTHENTICATED,
+		KubernetesKeys: keys,
+	}, nil
+}
+
+func (p *prodvider) groupMemberships(lconn *ldap.Conn, username string) (map[string]bool, error) {
+	searchRequest := ldap.NewSearchRequest(
+		flagLDAPGroupSearchBase,
+		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
+		fmt.Sprintf("(uniqueMember=%s)", fmt.Sprintf(flagLDAPBindDN, username)),
+		[]string{"dn", "cn"},
+		nil,
+	)
+
+	sr, err := lconn.Search(searchRequest)
+	if err != nil {
+		glog.Errorf("ldap.Search: %v", err)
+		return nil, status.Error(codes.Unavailable, "could not query LDAP for group")
+	}
+
+	res := make(map[string]bool)
+	for _, entry := range sr.Entries {
+		cn := entry.GetAttributeValue("cn")
+		res[cn] = true
+	}
+
+	return res, nil
+}
diff --git a/cluster/secrets/cipher/ca-kube-prodvider.key b/cluster/secrets/cipher/ca-kube-prodvider.key
new file mode 100644
index 0000000..bc0eb71
--- /dev/null
+++ b/cluster/secrets/cipher/ca-kube-prodvider.key
@@ -0,0 +1,91 @@
+-----BEGIN PGP MESSAGE-----
+
+hQEMAzhuiT4RC8VbAQf/cvVV9ecXc3sZ5wn7wIes0q6HEzZiYtkZHgFmFfRaVCRg
+QZIEn1AtuUyxJwLyaICODLv9fNhodVgyO03kRWQBCAbNpGzI+WnCpiJ41/tc2lKA
+g85mtzRtl84zFmvBZNvkZGVSW1OUKDLS4577GME3XYerdPAGzCmEtNuhMiBvAWZ5
+HBrCjtgOmOXvYiClS8tMbc6XJ09Jf6qqMzqHtqKjUsJ2RCYKAteL5vYIKTHjxZE4
+eM9NcdpzXmU3kzJMNLmPiqmyJ7tnwY4KwTlnb1xC4XSOVYWgug4LwZ3lpYkfw3h9
+Z5mt0sCW8zdOSDqAad3QoUjWSHoMGv8piRTd2HnkdIUBDANcG2tp6fXqvgEIAJOU
+e5q1Z08sy3EavJxOob0UjqUKxu/TlJu/tiIh7iqQRLKvC9ZrIMf8tB9RfLx9kogd
+O5GX1LOGHgFfSIdGSU3vw8JgxlRM1OGvsFW2UBmY0RUOAdSHpT1lU7cxuPjQoi+A
+OV+aTHBxTPNZgtS5ivANSoz6E8cdSn/yQxXUKAVf2OAThgLiJ5h6CW740g/sLh/J
+yY457gE+YBcJSQASafR8En+4+xnQ3ol20bmIqyTeSd01hxqeBBX8eOi1aRaQP+nI
+Br+lX4U1ePdzITmNHe+kGMA+Sm2gDqIt6dMvbMg2KmYHSzZTWUHluTToB003jchD
+tzszK/8PdHZFlTYOAjmFAgwDodoT8VqRl4UBEACmVka2DL2atNio1lUVkf8hdaTX
+0B8M6fcHUE6mG5krRT4O+rW61ZDd+UEeBoDYC+D4GBlaYKvv3HHIMsgjxaWNzh/x
+eyLS6JD0bPeFMtvStGwmdcL0ZY4aExYz8L0NIuLc7GGyZurdo6j9FRA/GujLH8Em
+zqL3WujVdbpCHla5eb9HxRGYTEQsOdsbVWeY6a/iYI1KQm4n5Jxgc6wv1JoWvyi5
+rn55ZzqiN/D6MdViIdIilqXC/A50m4ph6kqhV+PUQQLafhxU0i9tNBe1ZU8CdpCm
+6u7whg9WlBz3NawIly+5ZTKVh67ZLN144BfzMLc0+HB5X94FoeXvCQaehAYNUAsn
+GSwJPYpLzgpSk+tjixIV94vCbbNtmNNXvG/XY7FM3lWlPeooa7UKkf+Gn0qrZB06
+baiQyB+0dBnA1Q+lJuhVAJhedHM2xB2ZJVSVggQvw47HM8ovuoGNrQQFUYaEWtpn
+iQtEfAfd4iIXSBcmFcSZmUqyY4APAC2gk5Pv20ptrW1JXHOkD7AFvGMakRf/qfx/
+DTTaVPzi5W9ILiEkIaVFSXSuiFpySp6KTMl+Khnx3MRAx2K5uLob82uLcBjCSUcY
+hTM1iRpyJUFdGnj6KTPLHEEWCwHSwVVg7YkrQhq8+bHoxFc0yexXxrNR3FgluDKB
+RGkXYehHqKhELfKqAIUCDAPiA8lOXOuz7wEP/j2JHSS4rwdhKUJy9QEvvrhuyz6T
+KrvjoMzWZ1MUzUb8Sl4OFwQAxZuuJ/Lh1oWOzt3ZYguKJDOnheF4hWKCobgwYyE/
+6pHTisx8ZUAywLVbN9imsFaHA/Vpt+WxM4As0h08CGiUfODfy73SW0KaHMb4J4Jm
+MKv3B3Uzeyr3Ma+tIOeO7J60FG3GRoUpjDKq4Wl8a9JWxOt8BRnsyt6vzUx+uy3P
+XRjVE0t1+mEctwJtC561jzbKbCXieVHPtpSL9RIjVQsMm6ZoanCf81xxLT4Y6RmA
+9sjTOxE3zR6LDprl7zVt1nAFlAmtzS17qCXb4lDMjOaz3N7QCZxtU4RYgMi4S52h
+HN3BGRLX5k1YrP7ppRaXhu4r4G1mKW2wvT+lL3nar3e4wihMy3G0tkkaJWZuKRzF
+Vuxz2KsgrjXqr2F2v92I8m5r6bCO438WM2dU8yLWpw8bggbLz8fFqwSUcT1gRjJZ
+sPvDytoj9e0QDmDt8r41LTeDoZ/Dxn507H7SM/zt0OBv2Gc88lqY87ZMNHAoGyye
+u06CZ0KNgbAe97B6PDSbIfxgxZNFmzBsGyRAW4IuEHRkscL0gnnbQLWn6c6IhivN
+Ak596hkSa3pQT2fggi8YVOTnI6aaNr1/bkj7s4JMK8QLyaOoOffkhseRCSgXY1pB
+SXpSTW5KFH0yVWJV0usBzlVpqOiYxScXyop7xqK3JI2Tox3tJZqPdVqx2o1CTCAp
+aJOZTei+HCNuVKbScm7AJdJ53A7a3t5IpWQbgHObhHBttKCWKNi4IowFUsfVR40m
+Ekazsq4gWq5wJeEpxKHfB2l89g4ZWaW07hAz64+7sMgPu3/cBvoZeY076tbMJQwb
+KOkiP2k05r5x3+N4U6hJkfHq4EVA8hEm0X0KlXo6g1Muej0doGFcqZYTS8PVSm4q
+2AlwARZ4a8s5lWTrhjFlMosyvSfOKZlhl6T9xNfennWeiYiBZezr16uJEmDEjWqo
+vKiqfenMR74CweKCysmDQqw5nsAncphyMjIasAcuxfhQKif3lb0Car4CemYeY5lb
+30SaohQpDe4w/o06wk81rPWbSa9szDV/pH3msuVQBvBcyxdrbIXRuxjFXP1u82iG
+Y7U08DehkG7uTS8OwY0HJIQlUqGB+C1f0ASGFCGCk2UkTrL1pHc7bCpFuOZFIOM1
+/slb7fsHARa/CwpFiM1EhtDrGptqwtJ25Ymwc27BKXCXKtG6D7hfYPy2N4L7OqQB
+z2Am4Rz2iZyGHkuRutN8flBweLm0vN7v3zvb5sxhf1W5gnw2QWvNtOfZJQwlCEi9
+qaOZcJXDfbG2Tq7ArC5kJaugCxT22jdeLvWcD4SsUd1BHL6uBMCUQfplYOlcH6at
+/GBGM72QBpngIWcfqWITzkqWbXynfsxGrQUvnjvvcc1bJMq2DhpJgoH1DSNBAxQa
+1OS/4Vpuh/7zTlvFofl98Y1fEnRNvgJgps/Udzgi4R4ntjopnBGcEGAcY/7U58Ay
+0jOXqjlS9vS89sH752UBXVdAbGXHAO22j090+uXUvXCAsoUASvpEOeyKhK784K+2
+6LlA3uJ7CX5r3zoRRq6rx/YsyJRWNiolFo9oPGZpfnc9bb5PAL/NLOiUxp1ERcun
+p3gcrA7+5ZX/cJYFWkeEa10rswl9LcGX7LmdCYYemL21gbW0VhTIu4Arew9RvTlJ
+qEnXvSsLxg6nePY0ZqB50lAEQ/PYEBdwPjj/6dnSqhSaQXalWTPHqVz0SamaW4/0
+u7KcOXKUqGf/3tRQhcdFXP3e8kFrSYEgLJ9BdQgnMWs2T7B7hFeUEiXHIn0R4cRN
+knoXvhew1IggyK9IgjCyvSlejmFnJ+v1jmR9N9UJ8l3j2ei/t9zDhHAf18975171
+3vt1B0UjQlYhCesd/DdLI7OaKwPsjBG5U6IEz3P2wYsmHTfaNRwi8fPKfgBktQz1
+sPHzSKEXJjyEfPhlWdC5nldCj+tMJPi1FWuv4JpN7aOSzIVhahROk8TDyYJzkrlE
+ct1S3VhED4EzB64QSXuJbKbfbvL/m1Y0yWhta+FajBMRySCXpuG/xWP6qa2RMlq3
+uD2fWktc3Joen3ME7bfuI602dH8gwzTE7MvPOmz2G7EvwSTccvYZ1XSceYXjmmJQ
+nxUtZfH8N8W3+I7X/S9QJh7sIdqf/V7Xs6oqocnb8FdQOGxp8LHbDkq1sLcYtAsD
+edqIbzhdOvzdD0SfOenOxX+1t6ZlFaQCc2al/yC3xPNPW71kv6MZehqBIJ+IpfoD
+6b7NCiRi37/fxBV05EgZz8tNg8OouEjM/+HbxT11+4yZ1Ytyo85syeER0pYF9xly
+ibZLZo1ZznfHVDH+nJ3GI0rAaYCGobbg5tlxRqjT+NWscqTAz6sxAA/MwPKCg047
+vqyTIKEdmbqb0JTmwOH6F5VIdXOfU2wxpT0ZM1Vqaj5CkgHkbBU3KxCxraiG0Vks
+GJp2TqQRiSLNJH+qQuTdl+C67Wz09g0M/R8xii5xnBXcAGGYQzObMyFFaFMZlJZX
+MBDqbgI4uzL+0DYF7yHMFzCnj8LtsYwGfaaQqdY8yHT/zCBLj5+wVAi3BMoq1lXD
+KIfivdDXyMzdFGaouI3m+/7n3OCiH9KAh3banpoX+Vgy26u2aZVMrDcdSKqXfmQt
+R9RdRsPs4412ianWHwSZBVWEYNrXru0umWPV06FsCF1Ghu2LZTTxooPRiGOhIZ8v
+h0/kR111evpt25mYsqPpjovoVKKAJFc/odFhwcpRt7mh8WmArm6E6w2wv7F+Mza1
+gGuJgYe9RxKKMuJ1G4iOMShoHzL/Gvj+jEf1hxKnei/GZtI+KCncC1/sw9T0L2mL
+IzasXiSFPZ/w/QcDP7BqvmjGNwxrg37kjxaB8tvCQPWSHEjQa8npsK+cf43N+SbU
+39qKFtFkfS/UbU3yW8UNc9Y1S9Mh3QokRi8msHGI/bHgumKffLU9GmBRWmP1bxMj
+fw3he9SpkcVTNaHIG9/a4a5jsNOFEPAPu9B6DPNRjJtPw4NEi6RCPCs64MLfpUmu
+ouU3VuOo1x/6mhwJSlPjWIxw8mF/s/ofhoTf9Dtjl6ajEXkJWYMqewFj06cQ/fah
+7hr8g6MX3lYM3i38UryEtR9s4yK4wFIW/yU5pNA+lPWRTqDEA/a7TtG7LznGeoE+
+C7Ef9Ek66yr4592JKvxCgmUK03q4xwqcwL0s8KZ7e0BKZQxYMnJk8F+5UIjNUZQI
+Q/4Haf3zrsKl4vmiktJhGF29Z/CGhJIcbFabq73aWMKwl6ciihYmsTSKqh1wFhtP
+YmUridrZg062JVFedwtymyJ4K59Kbg4y52NZuj3FwkpqSaL8qSElq6tcaDs2U0HM
+mbivSXxa/he51cIKvEORK9vUae+JlrtbF00s6vtqjv8pvfGam/Gh/2eq2f7BdsE7
+LCRdC+OFnyMtsJHCCZ79c9Jx0URroahtJzsThAtra5eaJrEUGlw6dGfKnbHWCihp
+W3Z+wJS1URy1HPzL4C5f/XWkRoQBTM2Dbe5brRWskE/ZQTWort1qA1IaevZ0cuUA
+CjIk623F2l3AdDMEJhqVkZTSWRMud8lcLDgLwZo3aNlbA2RuQCTvNYDC2q9V08R/
+kdkRKE2W3Bqgqs9MxJ/ljnCuARffZlliAEjk9eZFea9KDwoxWK3IhsaokwP8p5dT
+9A/VXOOObovV/PN2VMjpWBxCHBc4w90X8c2hDSExVwqz2Bz1VsY7+G6H38DCnusO
+QDdGaERQBinOOxSxu6ZcEBdvAXD8LsaSShhHRY4b1wKMkKu3VNNGPwUqEateksXq
+xVN+Do3nvFDlKtALAhrmoUZPojbighdMYMw6lUO5flm8AiuD7lExs7d2m9My3SlL
+LVUcY7Ths0fi1q+LGPTeBY/3ZUydfESE0nrhoX+3+CPsVF7h9VztZB/FxAeg1eU7
+nAq+tkPqeZkOt+vWFAcv71EiU25Rv6K1W59kbpGYhjAQy208S2pqdwaQhPOA48OT
+xUdLZbNowRzWHClCOodVybbCuIaGYj1mJtKdxsHZZ9mmLwLkWoHcsjLPse7/TcWm
+Z93hDF6HQaWlS/dhm7HaUbvEXmwd38dTq+sQ
+=GZCJ
+-----END PGP MESSAGE-----
diff --git a/kube/policies.libsonnet b/kube/policies.libsonnet
new file mode 100644
index 0000000..e8e7aed
--- /dev/null
+++ b/kube/policies.libsonnet
@@ -0,0 +1,123 @@
+local kube = import "kube.libsonnet";
+
+{
+    local policies = self,
+
+    policyNameAllowInsecure: "policy:allow-insecure",
+    policyNameAllowSecure: "policy:allow-secure",
+
+    Cluster: {
+        insecure: kube._Object("policy/v1beta1", "PodSecurityPolicy", "insecure") {
+            spec: {
+                privileged: true,
+                allowPrivilegeEscalation: true,
+                allowedCapabilities: ['*'],
+                volumes: ['*'],
+                hostNetwork: true,
+                hostIPC: true,
+                hostPID: true,
+                runAsUser: {
+                    rule: 'RunAsAny',
+                },
+                seLinux: {
+                    rule: 'RunAsAny',
+                },
+                supplementalGroups: {
+                    rule: 'RunAsAny',
+                },
+                fsGroup: {
+                    rule: 'RunAsAny',
+                },
+            },
+        },
+        insecureRole: kube.ClusterRole(policies.policyNameAllowInsecure) {
+            rules: [
+                {
+                    apiGroups: ['policy'],
+                    resources: ['podsecuritypolicies'],
+                    verbs: ['use'],
+                    resourceNames: ['insecure'],
+                }
+            ],
+        },
+        secure: kube._Object("policy/v1beta1", "PodSecurityPolicy", "secure") {
+            spec: {
+                privileged: false,
+                # Required to prevent escalations to root.
+                allowPrivilegeEscalation: false,
+                # This is redundant with non-root + disallow privilege escalation,
+                # but we can provide it for defense in depth.
+                requiredDropCapabilities: ["ALL"],
+                # Allow core volume types.
+                volumes: [
+                    'configMap',
+                    'emptyDir',
+                    'projected',
+                    'secret',
+                    'downwardAPI',
+                    'persistentVolumeClaim',
+                ],
+                hostNetwork: false,
+                hostIPC: false,
+                hostPID: false,
+                runAsUser: {
+                    # Allow to run as root - docker, we trust you here.
+                    rule: 'RunAsAny',
+                },
+                seLinux: {
+                    rule: 'RunAsAny',
+                },
+                supplementalGroups: {
+                    rule: 'MustRunAs',
+                    ranges: [
+                        {
+                            # Forbid adding the root group.
+                            min: 1,
+                            max: 65535,
+                        }
+                    ],
+                },
+                fsGroup: {
+                    rule: 'MustRunAs',
+                    ranges: [
+                        {
+                            # Forbid adding the root group.
+                            min: 1,
+                            max: 65535,
+                        }
+                    ],
+                },
+                readOnlyRootFilesystem: false,
+            },
+        },
+        secureRole: kube.ClusterRole(policies.policyNameAllowSecure) {
+            rules: [
+                {
+                    apiGroups: ['policy'],
+                    resources: ['podsecuritypolicies'],
+                    verbs: ['use'],
+                    resourceNames: ['secure'],
+                },
+            ],
+        },
+    },
+
+    # Allow insecure access to all service accounts in a given namespace.
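+    # Example (as used elsewhere in this change):
+    #   insecurePolicy: policies.AllowNamespaceInsecure(cfg.namespace),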
+    AllowNamespaceInsecure(namespace): {
+        rb: kube.RoleBinding("policy:allow-insecure-in-" + namespace) {
+            metadata+: {
+                namespace: namespace,
+            },
+            roleRef_: policies.Cluster.insecureRole,
+            subjects: [
+                {
+                    kind: "Group",
+                    apiGroup: "rbac.authorization.k8s.io",
+                    name: "system:serviceaccounts",
+                }
+            ],
+        },
+    },
+}
diff --git a/tools/BUILD b/tools/BUILD
index e21744a..64faf53 100644
--- a/tools/BUILD
+++ b/tools/BUILD
@@ -20,3 +20,9 @@
     srcs = ["pass.py"],
     visibility = ["//visibility:public"],
 )
+
+copy_go_binary(
+    name = "prodaccess",
+    src = "//cluster/prodaccess:prodaccess",
+    visibility = ["//visibility:public"],
+)