Compare commits


12 Commits

Author SHA1 Message Date
0377c40a07 chore: cleanup gitea actions workflows (#451)
- migrated workflows to woodpeckerci

Reviewed-on: #451
2026-02-28 17:50:41 +11:00
8bb40dadce feat: add woodpecker ci jobs (#450)
- pre-commit job to run pre-commit against all files

Reviewed-on: #450
2026-02-28 17:30:23 +11:00
bc769aa1df feat: add ldap groups for kubernetes/vault (#449)
Need to separate the permissions inside Vault into different groups, one
per permission.

- add group for each kubernetes role in vault

Reviewed-on: #449
2026-02-14 19:22:26 +11:00
4e652ccbe6 chore: add alt-names to consul (#448)
- ensure consul datacenter is added to altnames

Reviewed-on: #448
2026-02-09 01:03:20 +11:00
8c24c6582f feat: manage vault version (#446)
- add params for version and package name
- add param to cleanup openbao
- add version lock (if not latest)

Reviewed-on: #446
2026-02-08 22:26:22 +11:00
6bfc63ca31 feat: enable plugins for vault/openbao (#447)
- install openbao-plugins
- add plugin_directory

Reviewed-on: #447
2026-02-08 19:19:33 +11:00
69dc9e8f66 docs: add docs for cephfs (#445)
- specifically related to managing csi volumes for kubernetes

Reviewed-on: #445
2026-02-03 19:56:14 +11:00
c4d28d52bc chore: remove helm deploys from puppet (#444)
- migrate helm deployments to terraform

Reviewed-on: #444
2026-01-30 20:52:51 +11:00
6219855fb1 chore: add additional user (#443)
- as per request

Reviewed-on: #443
2026-01-26 20:21:10 +11:00
7215a6f534 chore: terraform state too large for body (#442)
- update consul/nginx max body size to 512MB

Reviewed-on: #442
2026-01-18 17:15:08 +11:00
88efdbcdd3 chore: reduce synced repos (#441)
- remove repos now available via artifactapi

Reviewed-on: #441
2026-01-17 17:12:44 +11:00
3c114371e0 chore: docs for ceph (#440)
- add maintenance mode, how to bootstrap an osd, remove an osd

Reviewed-on: #440
2026-01-17 13:26:44 +11:00
14 changed files with 249 additions and 434 deletions

View File

@ -1,24 +0,0 @@
name: Build
on:
pull_request:
jobs:
precommit:
runs-on: almalinux-8
container:
image: git.unkin.net/unkin/almalinux9-actionsdind:latest
options: --privileged
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Install requirements
run: |
dnf groupinstall -y "Development Tools" -y
dnf install rubygems ruby-devel gcc make redhat-rpm-config glibc-headers glibc-devel -y
- name: Pre-Commit All Files
run: |
uvx pre-commit run --all-files

View File

@ -0,0 +1,10 @@
when:
- event: pull_request
steps:
- name: pre-commit
image: git.unkin.net/unkin/almalinux9-base:latest
commands:
- dnf groupinstall "Development Tools" -y
- dnf install uv rubygems ruby-devel gcc make redhat-rpm-config glibc-headers glibc-devel libffi libffi-devel -y
- uvx pre-commit run --all-files

View File

@ -28,6 +28,98 @@ Always refer back to the official documentation at https://docs.ceph.com/en/late
sudo ceph fs set mediafs max_mds 2
```
## managing cephfs with subvolumes
Create erasure-code profiles. The K and M values are equivalent to the number of data disks (K) and parity disks (M) in RAID5, RAID6, etc.
```
sudo ceph osd erasure-code-profile set ec_6_2 k=6 m=2
sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
```
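As a quick sanity check when picking profiles: usable space is roughly k/(k+m) of raw capacity, and the pool survives m OSD failures. A back-of-envelope sketch (ignores metadata and allocation overheads):

```shell
# rough usable-capacity figures for the two profiles above
for profile in "6 2" "4 1"; do
  set -- $profile
  awk -v k="$1" -v m="$2" \
    'BEGIN { printf "k=%d m=%d -> %.0f%% usable, tolerates %d OSD failure(s)\n", k, m, 100*k/(k+m), m }'
done
# k=6 m=2 -> 75% usable, tolerates 2 OSD failure(s)
# k=4 m=1 -> 80% usable, tolerates 1 OSD failure(s)
```

Note the trade-off: ec_4_1 yields more usable space but only tolerates a single OSD failure.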
Create data pools using the erasure-code profiles and set some required options:
```
sudo ceph osd pool create cephfs_data_ssd_ec_6_2 erasure ec_6_2
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 bulk true
sudo ceph osd pool create cephfs_data_ssd_ec_4_1 erasure ec_4_1
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 bulk true
```
Add the pools to the fs `cephfs`:
```
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_6_2
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_4_1
```
Create a subvolumegroup using each new data pool:
```
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_6_2 --pool_layout cephfs_data_ssd_ec_6_2
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_4_1 --pool_layout cephfs_data_ssd_ec_4_1
```
All together:
```
sudo ceph osd erasure-code-profile set ec_6_2 k=6 m=2
sudo ceph osd pool create cephfs_data_ssd_ec_6_2 erasure ec_6_2
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 bulk true
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_6_2
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_6_2 --pool_layout cephfs_data_ssd_ec_6_2
sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
sudo ceph osd pool create cephfs_data_ssd_ec_4_1 erasure ec_4_1
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 bulk true
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_4_1
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_4_1 --pool_layout cephfs_data_ssd_ec_4_1
```
Create a key with access to the new subvolume groups. Check whether the user already exists first:
```
sudo ceph auth get client.kubernetes-cephfs
```
If it doesn't exist:
```
sudo ceph auth get-or-create client.kubernetes-cephfs \
mgr 'allow rw' \
osd 'allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs' \
mds 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_6_2, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_4_1' \
mon 'allow r fsname=cephfs'
```
If it does, use `sudo ceph auth caps client.kubernetes-cephfs ...` instead to update the existing capabilities.
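An in-place update of an existing client might look like the following. Note that `ceph auth caps` replaces the entire capability set, so every cap must be restated, not just the new ones (the caps shown mirror the `get-or-create` example above):

```shell
# auth caps overwrites ALL capabilities for the client; restate every cap
sudo ceph auth caps client.kubernetes-cephfs \
  mgr 'allow rw' \
  osd 'allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs' \
  mds 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_6_2, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_4_1' \
  mon 'allow r fsname=cephfs'
```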
## removing a cephfs subvolumegroup from cephfs
This will clean up the subvolumegroup, and any subvolumes within it, then remove the pool.
Check for subvolumegroups first, then for subvolumes in them:
```
sudo ceph fs subvolumegroup ls cephfs
sudo ceph fs subvolume ls cephfs --group_name csi_raid6
```
If subvolumes exist, remove each one-by-one:
```
sudo ceph fs subvolume rm cephfs <subvol_name> --group_name csi_raid6
```
If a subvolume has snapshots, remove the snapshots first:
```
sudo ceph fs subvolume snapshot ls cephfs <subvol_name> --group_name csi_raid6
sudo ceph fs subvolume snapshot rm cephfs <subvol_name> <snap_name> --group_name csi_raid6
```
Once the group is empty, remove it:
```
sudo ceph fs subvolumegroup rm cephfs csi_raid6
```
If it complains that it's not empty, go back: there's still a subvolume or snapshot.
If you added the pool with `ceph fs add_data_pool`, undo with `rm_data_pool`:
```
sudo ceph fs rm_data_pool cephfs cephfs_data_csi_raid6
```
After it's detached from CephFS, you can delete the pool:
```
sudo ceph osd pool rm cephfs_data_csi_raid6 cephfs_data_csi_raid6 --yes-i-really-really-mean-it
```
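If the delete is refused because pool deletion is disabled, the monitors need `mon_allow_pool_delete` enabled. The example ceph.conf later in this doc sets it permanently; it can also be toggled at runtime (a sketch, re-disable it afterwards if your cluster keeps it off by policy):

```shell
# pool deletion is gated behind this mon option; enable, delete, then re-disable
sudo ceph config set mon mon_allow_pool_delete true
sudo ceph osd pool rm cephfs_data_csi_raid6 cephfs_data_csi_raid6 --yes-i-really-really-mean-it
sudo ceph config set mon mon_allow_pool_delete false
```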
## creating authentication tokens
- this will create a client keyring named media
@ -58,3 +150,78 @@ this will overwrite the current capabilities of a given client.user
mon 'allow r' \
mds 'allow rw path=/' \
osd 'allow rw pool=media_data'
## adding a new osd on a new node
Create the ceph conf (automate this?):
```
cat <<EOF | sudo tee /etc/ceph/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
fsid = de96a98f-3d23-465a-a899-86d3d67edab8
mon_allow_pool_delete = true
mon_initial_members = prodnxsr0009,prodnxsr0010,prodnxsr0011,prodnxsr0012,prodnxsr0013
mon_host = 198.18.23.9,198.18.23.10,198.18.23.11,198.18.23.12,198.18.23.13
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_crush_chooseleaf_type = 1
osd_pool_default_min_size = 2
osd_pool_default_size = 3
osd_pool_default_pg_num = 128
public_network = 198.18.23.1/32,198.18.23.2/32,198.18.23.3/32,198.18.23.4/32,198.18.23.5/32,198.18.23.6/32,198.18.23.7/32,198.18.23.8/32,198.18.23.9/32,198.18.23.10/32,198.18.23.11/32,198.18.23.12/32,198.18.23.13/32
EOF
```
SSH to one of the monitor hosts, then transfer the required keys to the new node:
```
sudo cat /etc/ceph/ceph.client.admin.keyring | ssh prodnxsr0003 'sudo tee /etc/ceph/ceph.client.admin.keyring'
sudo cat /var/lib/ceph/bootstrap-osd/ceph.keyring | ssh prodnxsr0003 'sudo tee /var/lib/ceph/bootstrap-osd/ceph.keyring'
```
Assuming we are adding /dev/sda to the cluster, first zap the disk to remove partitions/LVM/metadata:
```
sudo ceph-volume lvm zap /dev/sda --destroy
```
Then add it to the cluster:
```
sudo ceph-volume lvm create --data /dev/sda
```
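Once created, the OSD should join the cluster and begin backfilling. A quick verification pass (standard Ceph commands, shown as a sketch):

```shell
sudo ceph-volume lvm list   # confirm the new LVM-backed OSD exists on this host
sudo ceph osd tree          # the new OSD should appear under this host with status 'up'
sudo ceph -s                # watch backfill/recovery progress
```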
## removing an osd
Check which OSD IDs were on the host (if you know it):
```
sudo ceph osd tree
```
Or check for any DOWN OSDs:
```
sudo ceph osd stat
sudo ceph health detail
```
Once you have identified the old OSD ID, remove it with these steps, replacing X with the actual OSD ID:
```
sudo ceph osd out osd.X
sudo ceph osd down osd.X
sudo ceph osd crush remove osd.X
sudo ceph auth del osd.X
sudo ceph osd rm osd.X
```
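On recent Ceph releases (Luminous and later), the crush remove / auth del / osd rm steps can be collapsed into a single purge:

```shell
# purge = crush remove + auth del + osd rm in one step
sudo ceph osd purge osd.X --yes-i-really-mean-it
```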
## maintenance mode for the cluster
From one node in the cluster, disable recovery:
```
sudo ceph osd set noout
sudo ceph osd set nobackfill
sudo ceph osd set norecover
sudo ceph osd set norebalance
sudo ceph osd set nodown
sudo ceph osd set pause
```
To undo the change, use unset:
```
sudo ceph osd unset noout
sudo ceph osd unset nobackfill
sudo ceph osd unset norecover
sudo ceph osd unset norebalance
sudo ceph osd unset nodown
sudo ceph osd unset pause
```
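The set/unset pairs above can be wrapped in a small helper so no flag is missed. A sketch (the function name is made up for illustration):

```shell
# toggle all cluster maintenance flags at once; pass "set" or "unset"
ceph_maintenance() {
  action="$1"
  case "$action" in
    set|unset) ;;
    *) echo "usage: ceph_maintenance set|unset" >&2; return 1 ;;
  esac
  for flag in noout nobackfill norecover norebalance nodown pause; do
    sudo ceph osd "$action" "$flag"
  done
}
# ceph_maintenance set    # before maintenance
# ceph_maintenance unset  # after maintenance
```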

View File

@ -66,6 +66,9 @@ glauth::users:
- 20025 # jupyterhub_admin
- 20026 # jupyterhub_user
- 20027 # grafana_user
- 20028 # k8s/au/syd1 operator
- 20029 # k8s/au/syd1 admin
- 20030 # k8s/au/syd1 root
loginshell: '/bin/bash'
homedir: '/home/benvin'
passsha256: 'd2434f6b4764ef75d5b7b96a876a32deedbd6aa726a109c3f32e823ca66f604a'
@ -223,6 +226,24 @@ glauth::users:
loginshell: '/bin/bash'
homedir: '/home/debvin'
passsha256: 'cdac05ddb02e665d4ea65a974995f38a10236bc158731d92d78f6cde89b294a1'
jassol:
user_name: 'jassol'
givenname: 'Jason'
sn: 'Solomon'
mail: 'jassol@users.main.unkin.net'
uidnumber: 20010
primarygroup: 20000
othergroups:
- 20010 # jelly
- 20011 # sonarr
- 20012 # radarr
- 20013 # lidarr
- 20014 # readarr
- 20016 # nzbget
- 20027 # grafana user
loginshell: '/bin/bash'
homedir: '/home/jassol'
passsha256: 'd8e215d3c94b954e1318c9c7243ce72713f2fb1d006037724fe857c1fb7e88e9'
glauth::services:
svc_jellyfin:
@ -367,3 +388,12 @@ glauth::groups:
grafana_user:
group_name: 'grafana_user'
gidnumber: 20027
kubernetes_au_syd1_cluster_operator:
group_name: 'kubernetes_au_syd1_cluster_operator'
gidnumber: 20028
kubernetes_au_syd1_cluster_admin:
group_name: 'kubernetes_au_syd1_cluster_admin'
gidnumber: 20029
kubernetes_au_syd1_cluster_root:
group_name: 'kubernetes_au_syd1_cluster_root'
gidnumber: 20030

View File

@ -3,9 +3,6 @@
rke2::node_type: server
rke2::helm_install: true
rke2::helm_repos:
rancher-stable: https://releases.rancher.com/server-charts/stable
purelb: https://gitlab.com/api/v4/projects/20400619/packages/helm/stable
jetstack: https://charts.jetstack.io
harbor: https://helm.goharbor.io
traefik: https://traefik.github.io/charts
hashicorp: https://helm.releases.hashicorp.com

View File

@ -3,125 +3,6 @@ profiles::packages::include:
createrepo: {}
profiles::reposync::repos_list:
almalinux_9.7_baseos:
repository: 'baseos'
description: 'AlmaLinux 9.7 BaseOS'
osname: 'almalinux'
release: '9.7'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.7/baseos'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.7_appstream:
repository: 'appstream'
description: 'AlmaLinux 9.7 AppStream'
osname: 'almalinux'
release: '9.7'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.7/appstream'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.7_crb:
repository: 'crb'
description: 'AlmaLinux 9.7 CRB'
osname: 'almalinux'
release: '9.7'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.7/crb'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.7_ha:
repository: 'ha'
description: 'AlmaLinux 9.7 HighAvailability'
osname: 'almalinux'
release: '9.7'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.7/highavailability'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.7_extras:
repository: 'extras'
description: 'AlmaLinux 9.7 extras'
osname: 'almalinux'
release: '9.7'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.7/extras'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.6_baseos:
repository: 'baseos'
description: 'AlmaLinux 9.6 BaseOS'
osname: 'almalinux'
release: '9.6'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.6/baseos'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.6_appstream:
repository: 'appstream'
description: 'AlmaLinux 9.6 AppStream'
osname: 'almalinux'
release: '9.6'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.6/appstream'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.6_crb:
repository: 'crb'
description: 'AlmaLinux 9.6 CRB'
osname: 'almalinux'
release: '9.6'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.6/crb'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.6_ha:
repository: 'ha'
description: 'AlmaLinux 9.6 HighAvailability'
osname: 'almalinux'
release: '9.6'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.6/highavailability'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9.6_extras:
repository: 'extras'
description: 'AlmaLinux 9.6 extras'
osname: 'almalinux'
release: '9.6'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.6/extras'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9_5_baseos:
repository: 'baseos'
description: 'AlmaLinux 9.5 BaseOS'
osname: 'almalinux'
release: '9.5'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.5/baseos'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9_5_appstream:
repository: 'appstream'
description: 'AlmaLinux 9.5 AppStream'
osname: 'almalinux'
release: '9.5'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.5/appstream'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9_5_crb:
repository: 'crb'
description: 'AlmaLinux 9.5 CRB'
osname: 'almalinux'
release: '9.5'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.5/crb'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9_5_ha:
repository: 'ha'
description: 'AlmaLinux 9.5 HighAvailability'
osname: 'almalinux'
release: '9.5'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.5/highavailability'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
almalinux_9_5_extras:
repository: 'extras'
description: 'AlmaLinux 9.5 extras'
osname: 'almalinux'
release: '9.5'
mirrorlist: 'https://mirrors.almalinux.org/mirrorlist/9.5/extras'
gpgkey: 'http://mirror.aarnet.edu.au/pub/almalinux/RPM-GPG-KEY-AlmaLinux-9'
epel_8:
repository: 'everything'
description: 'EPEL8'
osname: 'epel'
release: '8'
mirrorlist: 'https://mirrors.fedoraproject.org/mirrorlist?repo=epel-8&arch=x86_64'
gpgkey: 'https://epel.mirror.digitalpacific.com.au/RPM-GPG-KEY-EPEL-8'
epel_9:
repository: 'everything'
description: 'EPEL9'
osname: 'epel'
release: '9'
mirrorlist: 'https://mirrors.fedoraproject.org/mirrorlist?repo=epel-9&arch=x86_64'
gpgkey: 'https://epel.mirror.digitalpacific.com.au/RPM-GPG-KEY-EPEL-9'
docker_stable_el8:
repository: 'stable'
description: 'Docker CE Stable EL8'
@ -136,34 +17,6 @@ profiles::reposync::repos_list:
release: 'el9'
baseurl: 'https://download.docker.com/linux/centos/9/x86_64/stable/'
gpgkey: 'https://download.docker.com/linux/centos/gpg'
frr_stable_el8:
repository: 'stable'
description: 'FRR Stable EL8'
osname: 'frr'
release: 'el8'
baseurl: 'https://rpm.frrouting.org/repo/el8/frr/'
gpgkey: 'https://packagerepo.service.consul/frr/gpg/RPM-GPG-KEY-FRR'
frr_extras_el8:
repository: 'extras'
description: 'FRR Extras EL8'
osname: 'frr'
release: 'el8'
baseurl: 'https://rpm.frrouting.org/repo/el8/extras/'
gpgkey: 'https://packagerepo.service.consul/frr/gpg/RPM-GPG-KEY-FRR'
frr_stable_el9:
repository: 'stable'
description: 'FRR Stable EL9'
osname: 'frr'
release: 'el9'
baseurl: 'https://rpm.frrouting.org/repo/el9/frr/'
gpgkey: 'https://packagerepo.service.consul/frr/gpg/RPM-GPG-KEY-FRR'
frr_extras_el9:
repository: 'extras'
description: 'FRR Extras el9'
osname: 'frr'
release: 'el9'
baseurl: 'https://rpm.frrouting.org/repo/el9/extras/'
gpgkey: 'https://packagerepo.service.consul/frr/gpg/RPM-GPG-KEY-FRR'
k8s_1.32:
repository: '1.32'
description: 'Kubernetes 1.32'
@ -178,62 +31,6 @@ profiles::reposync::repos_list:
release: '1.33'
baseurl: 'https://pkgs.k8s.io/core:/stable:/v1.33/rpm/'
gpgkey: 'https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key'
mariadb_11_8_el8:
repository: 'el8'
description: 'MariaDB 11.8'
osname: 'mariadb'
release: '11.8'
baseurl: 'http://mariadb.mirror.digitalpacific.com.au/yum/11.8/rhel8-amd64/'
gpgkey: 'https://mariadb.mirror.digitalpacific.com.au/yum/RPM-GPG-KEY-MariaDB'
mariadb_11_8_el9:
repository: 'el9'
description: 'MariaDB 11.8'
osname: 'mariadb'
release: '11.8'
baseurl: 'http://mariadb.mirror.digitalpacific.com.au/yum/11.8/rhel9-amd64/'
gpgkey: 'https://mariadb.mirror.digitalpacific.com.au/yum/RPM-GPG-KEY-MariaDB'
openvox7_el8:
repository: '8'
description: 'openvox 7 EL8'
osname: 'openvox7'
release: 'el'
baseurl: 'https://yum.voxpupuli.org/openvox7/el/8/x86_64/'
gpgkey: 'https://yum.voxpupuli.org/GPG-KEY-openvox.pub'
openvox7_el9:
repository: '9'
description: 'openvox 7 EL9'
osname: 'openvox7'
release: 'el'
baseurl: 'https://yum.voxpupuli.org/openvox7/el/9/x86_64/'
gpgkey: 'https://yum.voxpupuli.org/GPG-KEY-openvox.pub'
openvox7_el10:
repository: '10'
description: 'openvox 7 EL10'
osname: 'openvox7'
release: 'el'
baseurl: 'https://yum.voxpupuli.org/openvox7/el/10/x86_64/'
gpgkey: 'https://yum.voxpupuli.org/GPG-KEY-openvox.pub'
openvox8_el8:
repository: '8'
description: 'openvox 8 EL8'
osname: 'openvox8'
release: 'el'
baseurl: 'https://yum.voxpupuli.org/openvox8/el/8/x86_64/'
gpgkey: 'https://yum.voxpupuli.org/GPG-KEY-openvox.pub'
openvox8_el9:
repository: '9'
description: 'openvox 8 EL9'
osname: 'openvox8'
release: 'el'
baseurl: 'https://yum.voxpupuli.org/openvox8/el/9/x86_64/'
gpgkey: 'https://yum.voxpupuli.org/GPG-KEY-openvox.pub'
openvox8_el10:
repository: '10'
description: 'openvox 8 EL10'
osname: 'openvox8'
release: 'el'
baseurl: 'https://yum.voxpupuli.org/openvox8/el/10/x86_64/'
gpgkey: 'https://yum.voxpupuli.org/GPG-KEY-openvox.pub'
puppet7_el8:
repository: '8'
description: 'Puppet 7 EL8'
@ -262,76 +59,6 @@ profiles::reposync::repos_list:
release: 'el'
baseurl: 'https://yum.puppet.com/puppet8/el/9/x86_64/'
gpgkey: 'https://yum.puppet.com/RPM-GPG-KEY-puppet-20250406'
postgresql_rhel8_common:
repository: 'common'
description: 'PostgreSQL Common RHEL 8'
osname: 'postgresql'
release: 'rhel8'
baseurl: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-8-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel9_common:
repository: 'common'
description: 'PostgreSQL Common RHEL 9'
osname: 'postgresql'
release: 'rhel9'
baseurl: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-9-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel8_15:
repository: '15'
description: 'PostgreSQL 15 RHEL 8'
osname: 'postgresql'
release: 'rhel8'
baseurl: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-8-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel9_15:
repository: '15'
description: 'PostgreSQL 15 RHEL 9'
osname: 'postgresql'
release: 'rhel9'
baseurl: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-9-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel8_16:
repository: '16'
description: 'PostgreSQL 16 RHEL 8'
osname: 'postgresql'
release: 'rhel8'
baseurl: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-8-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel9_16:
repository: '16'
description: 'PostgreSQL 16 RHEL 9'
osname: 'postgresql'
release: 'rhel9'
baseurl: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-9-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel8_17:
repository: '17'
description: 'PostgreSQL 17 RHEL 8'
osname: 'postgresql'
release: 'rhel8'
baseurl: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-8-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
postgresql_rhel9_17:
repository: '17'
description: 'PostgreSQL 17 RHEL 9'
osname: 'postgresql'
release: 'rhel9'
baseurl: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-9-x86_64/'
gpgkey: 'https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-RHEL'
rke2_common_el9:
repository: 'common'
description: 'RKE2 common RHEL 9'
osname: 'rke2'
release: "rhel9"
baseurl: "https://rpm.rancher.io/rke2/latest/common/centos/9/noarch"
gpgkey: "https://rpm.rancher.io/public.key"
rke2_1_33_el9:
repository: '1.33'
description: 'RKE2 1.33 RHEL 9'
osname: 'rke2'
release: "rhel9"
baseurl: "https://rpm.rancher.io/rke2/latest/1.33/centos/9/x86_64"
gpgkey: "https://rpm.rancher.io/public.key"
zfs_dkms_rhel8:
repository: 'dkms'
description: 'ZFS DKMS RHEL 8'

View File

@ -29,6 +29,7 @@ profiles::consul::server::acl:
profiles::pki::vault::alt_names:
- consul.main.unkin.net
- consul.service.consul
- "consul.service.%{facts.country}-%{facts.region}.consul"
- consul
# manage a simple nginx reverse proxy
@ -38,6 +39,7 @@ profiles::nginx::simpleproxy::nginx_aliases:
- consul.main.unkin.net
profiles::nginx::simpleproxy::proxy_port: 8500
profiles::nginx::simpleproxy::proxy_path: '/'
nginx::client_max_body_size: 512M
# consul
profiles::consul::client::node_rules:

View File

@ -2,10 +2,12 @@
profiles::vault::server::members_role: roles::infra::storage::vault
profiles::vault::server::members_lookup: true
profiles::vault::server::data_dir: /data/vault
profiles::vault::server::plugin_dir: /opt/openbao-plugins
profiles::vault::server::manage_storage_dir: true
profiles::vault::server::tls_disable: false
vault::package_name: openbao
vault::package_ensure: latest
profiles::vault::server::package_name: openbao
profiles::vault::server::package_ensure: 2.4.4
profiles::vault::server::disable_openbao: false
# additional altnames
profiles::pki::vault::alt_names:
@ -23,3 +25,6 @@ profiles::nginx::simpleproxy::proxy_scheme: 'http'
profiles::nginx::simpleproxy::proxy_host: '127.0.0.1'
profiles::nginx::simpleproxy::proxy_port: 8200
profiles::nginx::simpleproxy::proxy_path: '/'
profiles::packages::include:
openbao-plugins: {}

View File

@ -1,23 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rancher
namespace: cattle-system
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts: [rancher.main.unkin.net]
secretName: tls-rancher
rules:
- host: rancher.main.unkin.net
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: rancher
port:
number: 80

View File

@ -1,45 +0,0 @@
apiVersion: purelb.io/v1
kind: LBNodeAgent
metadata:
name: common
namespace: purelb
spec:
local:
extlbint: kube-lb0
localint: default
sendgarp: false
---
apiVersion: purelb.io/v1
kind: LBNodeAgent
metadata:
name: dmz
namespace: purelb
spec:
local:
extlbint: kube-lb0
localint: default
sendgarp: false
---
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
name: dmz
namespace: purelb
spec:
local:
v4pools:
- subnet: 198.18.199.0/24
pool: 198.18.199.0/24
aggregation: /32
---
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
name: common
namespace: purelb
spec:
local:
v4pools:
- subnet: 198.18.200.0/24
pool: 198.18.200.0/24
aggregation: /32

View File

@ -68,30 +68,6 @@ class rke2::config (
# on the controller nodes only
if $node_type == 'server' and $facts['k8s_masters'] and $facts['k8s_masters'] > 2 {
# wait for purelb helm to setup namespace
if 'purelb' in $facts['k8s_namespaces'] {
file {'/var/lib/rancher/rke2/server/manifests/purelb-config.yaml':
ensure => file,
owner => 'root',
group => 'root',
mode => '0644',
source => 'puppet:///modules/rke2/purelb-config.yaml',
require => Service['rke2-server'],
}
}
# wait for rancher helm to setup namespace
if 'cattle-system' in $facts['k8s_namespaces'] {
file {'/var/lib/rancher/rke2/server/manifests/ingress-route-rancher.yaml':
ensure => file,
owner => 'root',
group => 'root',
mode => '0644',
source => 'puppet:///modules/rke2/ingress-route-rancher.yaml',
require => Service['rke2-server'],
}
}
# manage extra config files (these are not dependent on helm)
$extra_config_files.each |$file| {

View File

@ -38,44 +38,6 @@ class rke2::helm (
}
}
}
# install specific helm charts to bootstrap environment
$plb_cmd = 'helm install purelb purelb/purelb \
--create-namespace \
--namespace=purelb \
--repository-config /etc/helm/repositories.yaml'
exec { 'install_purelb':
command => $plb_cmd,
path => ['/usr/bin', '/bin'],
environment => ['KUBECONFIG=/etc/rancher/rke2/rke2.yaml'],
unless => 'helm list -n purelb | grep -q ^purelb',
}
$cm_cmd = 'helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true \
--repository-config /etc/helm/repositories.yaml'
exec { 'install_cert_manager':
command => $cm_cmd,
path => ['/usr/bin', '/bin'],
environment => ['KUBECONFIG=/etc/rancher/rke2/rke2.yaml'],
unless => 'helm list -n cert-manager | grep -q ^cert-manager',
}
$r_cmd = 'helm install rancher rancher-stable/rancher \
--namespace cattle-system \
--create-namespace \
--set hostname=rancher.main.unkin.net \
--set bootstrapPassword=admin \
--set ingress.tls.source=secret \
--repository-config /etc/helm/repositories.yaml'
exec { 'install_rancher':
command => $r_cmd,
path => ['/usr/bin', '/bin'],
environment => ['KUBECONFIG=/etc/rancher/rke2/rke2.yaml'],
unless => 'helm list -n cattle-system | grep -q ^rancher',
}
}
}
}

View File

@ -1,7 +1,7 @@
# rke2 params
class rke2::params (
Enum['server', 'agent'] $node_type = 'agent',
String $rke2_version = '1.33.7',
String $rke2_version = '1.33.4',
String $rke2_release = 'rke2r1',
Stdlib::Absolutepath $config_file = '/etc/rancher/rke2/config.yaml',
Hash $config_hash = {},

View File

@ -6,11 +6,15 @@ class profiles::vault::server (
Undef
] $members_role = undef,
Array $vault_servers = [],
String $package_name = 'vault',
String $package_ensure = 'latest',
Boolean $disable_openbao = true,
Boolean $tls_disable = false,
Stdlib::Port $client_port = 8200,
Stdlib::Port $cluster_port = 8201,
Boolean $manage_storage_dir = false,
Stdlib::Absolutepath $data_dir = '/opt/vault',
Stdlib::Absolutepath $plugin_dir = '/opt/vault_plugins',
Stdlib::Absolutepath $bin_dir = '/usr/bin',
Stdlib::Absolutepath $ssl_crt = '/etc/pki/tls/vault/certificate.crt',
Stdlib::Absolutepath $ssl_key = '/etc/pki/tls/vault/private.key',
@ -51,7 +55,33 @@ class profiles::vault::server (
}
}
# cleanup openbao?
if $disable_openbao {
package {'openbao':
ensure => absent,
before => Class['vault']
}
package {'openbao-vault-compat':
ensure => absent,
before => [
Class['vault'],
Package['openbao']
]
}
}
# add versionlock for package_name?
if $package_ensure != 'latest' {
yum::versionlock{$package_name:
ensure => present,
version => $package_ensure,
before => Class['vault']
}
}
class { 'vault':
package_name => $package_name,
package_ensure => $package_ensure,
manage_service => false,
manage_storage_dir => $manage_storage_dir,
enable_ui => true,
@ -65,6 +95,7 @@ class profiles::vault::server (
api_addr => "${http_scheme}://${::facts['networking']['fqdn']}:${client_port}",
extra_config => {
cluster_addr => "${http_scheme}://${::facts['networking']['fqdn']}:${cluster_port}",
plugin_directory => $plugin_dir,
},
listener => [
{