HSCloud Clusters
================

Current cluster: `k0.hswaw.net`

Accessing via kubectl
---------------------

    prodaccess # get a short-lived certificate for your use via SSO
               # if your local username is not the same as your HSWAW
               # SSO username, pass `-username foo`
    kubectl version
    kubectl top nodes

Every user gets a `personal-$username` namespace. Feel free to use it for your own purposes, but watch out for resource usage!

    kubectl run -n personal-$username --image=alpine:latest -it foo

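Note that depending on your kubectl version, `run` creates either a bare Pod or a Deployment. Clean up after yourself when you're done, for example:

    kubectl -n personal-$username delete pod foo        # newer kubectl (pod generator)
    kubectl -n personal-$username delete deployment foo # older kubectl (deployment generator)
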
To proceed further, you should be somewhat familiar with Kubernetes; otherwise, the rest of the terminology might not make sense. We recommend going through the official Kubernetes tutorials.

Persistent Storage (waw2)
-------------------------

HDDs on bc01n0{1-3}. 3TB total capacity. Don't use this pool: it should go away soon (the disks are slow, the network is slow, and the RAID controllers lie). Use ceph-waw3 instead.

The following storage classes use this cluster:

- `waw-hdd-paranoid-1` - 3 replicas
- `waw-hdd-redundant-1` - erasure coded 2.1
- `waw-hdd-yolo-1` - unreplicated (you _will_ lose your data)
- `waw-hdd-redundant-1-object` - erasure coded 2.1 object store

Rados Gateway (S3) is available at https://object.ceph-waw2.hswaw.net/. To create a user, ask an admin.

PersistentVolumes currently bound to PersistentVolumeClaims get automatically backed up (hourly for the next 48 hours, then once every 4 weeks, then once every month for a year).

Persistent Storage (waw3)
-------------------------

HDDs on dcr01s2{2,4}. 40TB total capacity for now. Use this.

The following storage classes use this cluster (an example claim follows the list):

- `waw-hdd-yolo-3` - 1 replica
- `waw-hdd-redundant-3` - 2 replicas
- `waw-hdd-redundant-3-object` - 2 replicas, object store

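As a sketch, a claim against the redundant pool might look as follows (the claim name, namespace, and size are made up for illustration):

    kubectl apply -n personal-$username -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-data
    spec:
      storageClassName: waw-hdd-redundant-3
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF
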
Rados Gateway (S3) is available at https://object.ceph-waw3.hswaw.net/. To create a user, ask an admin.

PersistentVolumes currently bound to PVCs get automatically backed up (hourly for the next 48 hours, then once every 4 weeks, then once every month for a year).

Administration
==============

Provisioning nodes
------------------

- bring up a new node with NixOS; the configuration doesn't matter and will be nuked anyway
- edit `cluster/nix/defs-machines.nix`
- `bazel run //cluster/clustercfg nodestrap bc01nXX.hswaw.net`

Ceph - Debugging
----------------

We run Ceph via Rook. The Rook operator is running in the `ceph-rook-system` namespace. To debug Ceph issues, start by looking at its logs.

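For example, to tail the operator's logs (assuming the operator pods carry Rook's usual `app=rook-ceph-operator` label):

    kubectl -n ceph-rook-system logs -l app=rook-ceph-operator --tail=100
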
A dashboard is available at https://ceph-waw2.hswaw.net/. To get the admin password, run:

    kubectl -n ceph-waw2 get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode ; echo

Ceph - Backups
--------------

Kubernetes PVs backed by Ceph RBDs get backed up using Benji. An hourly cronjob runs in every Ceph cluster. You can also trigger a run manually:

    kubectl -n ceph-waw2 create job --from=cronjob/ceph-waw2-benji ceph-waw2-benji-manual-$(date +%s)

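To follow such a manually triggered run, find the job and tail its logs (the timestamp suffix is whatever `date +%s` produced above):

    kubectl -n ceph-waw2 get jobs
    kubectl -n ceph-waw2 logs -f job/ceph-waw2-benji-manual-<timestamp>
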
Ceph ObjectStorage pools (RADOSGW) are _not_ backed up yet!

Ceph - Object Storage
---------------------

To create an object store user, consult the rook.io manual (https://rook.io/docs/rook/v0.9/ceph-object-store-user-crd.html).
The user authentication secret is generated in the Ceph cluster namespace (`ceph-waw2`)
and thus may need to be manually copied into the application namespace (see the
comment in `app/registry/prod.jsonnet`).

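A minimal sketch of such a user, following the linked CRD documentation (the user name here is hypothetical; the store name should match the object store you are targeting):

    kubectl apply -f - <<EOF
    apiVersion: ceph.rook.io/v1
    kind: CephObjectStoreUser
    metadata:
      name: example-user
      namespace: ceph-waw2
    spec:
      store: waw-hdd-redundant-1-object
      displayName: example-user
    EOF
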
`tools/rook-s3cmd-config` can be used to generate a test configuration file for s3cmd.
Remember to append `:default-placement` to your region name (e.g. `waw-hdd-redundant-1-object:default-placement`).

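For example, creating a bucket with the generated configuration might look like this (the config path and bucket name are made up):

    s3cmd -c path/to/generated-config --region=waw-hdd-redundant-1-object:default-placement mb s3://example-bucket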