Compare commits


1 Commit

Author SHA1 Message Date
03094712d5 feat: deploy ceph
- cleanup subnet_facts, add transit links
- cleanup role::ceph::*
- add openstack-ceph module
- add ceph-mon profile
2025-05-15 07:14:04 +10:00
365 changed files with 2272 additions and 8160 deletions

View File

@@ -8,6 +8,3 @@ Style/Documentation:
Layout/LineLength:
Max: 140
Metrics/BlockNesting:
Max: 4

View File

@@ -1,10 +0,0 @@
when:
- event: pull_request
steps:
- name: pre-commit
image: git.unkin.net/unkin/almalinux9-base:latest
commands:
- dnf groupinstall "Development Tools" -y
- dnf install uv rubygems ruby-devel gcc make redhat-rpm-config glibc-headers glibc-devel libffi libffi-devel -y
- uvx pre-commit run --all-files

View File

@@ -19,7 +19,6 @@ mod 'puppetlabs-haproxy', '8.2.0'
mod 'puppetlabs-java', '11.1.0'
mod 'puppetlabs-reboot', '5.1.0'
mod 'puppetlabs-docker', '10.2.0'
mod 'puppetlabs-mailalias_core', '1.2.0'
# puppet
mod 'puppet-python', '7.4.0'
@@ -44,8 +43,6 @@ mod 'puppet-letsencrypt', '11.1.0'
mod 'puppet-rundeck', '9.2.0'
mod 'puppet-redis', '11.1.0'
mod 'puppet-nodejs', '11.0.0'
mod 'puppet-postfix', '5.1.0'
mod 'puppet-alternatives', '6.0.0'
# other
mod 'saz-sudo', '9.0.2'
@@ -63,7 +60,8 @@ mod 'rehan-mkdir', '2.0.0'
mod 'tailoredautomation-patroni', '2.0.0'
mod 'ssm-crypto_policies', '0.3.3'
mod 'thias-sysctl', '1.0.8'
-mod 'cirrax-dovecot', '1.3.3'
+mod 'openstack-ceph', '7.0.0'
mod 'bind',
:git => 'https://git.service.au-syd1.consul/unkinben/puppet-bind.git',

View File

@@ -28,98 +28,6 @@ Always refer back to the official documentation at https://docs.ceph.com/en/late
sudo ceph fs set mediafs max_mds 2
```
## managing cephfs with subvolumes
Create erasure-code profiles. The k and m values are analogous to the number of data disks (k) and parity disks (m) in RAID5, RAID6, etc.
sudo ceph osd erasure-code-profile set ec_6_2 k=6 m=2
sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
Create data pools using the erasure-code profiles, then set the required options
sudo ceph osd pool create cephfs_data_ssd_ec_6_2 erasure ec_6_2
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 bulk true
sudo ceph osd pool create cephfs_data_ssd_ec_4_1 erasure ec_4_1
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 bulk true
Add the pools to the filesystem `cephfs`
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_6_2
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_4_1
Create subvolume groups using the new data pools
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_6_2 --pool_layout cephfs_data_ssd_ec_6_2
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_4_1 --pool_layout cephfs_data_ssd_ec_4_1
All together:
sudo ceph osd erasure-code-profile set ec_6_2 k=6 m=2
sudo ceph osd pool create cephfs_data_ssd_ec_6_2 erasure ec_6_2
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_6_2 bulk true
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_6_2
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_6_2 --pool_layout cephfs_data_ssd_ec_6_2
sudo ceph osd erasure-code-profile set ec_4_1 k=4 m=1
sudo ceph osd pool create cephfs_data_ssd_ec_4_1 erasure ec_4_1
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 allow_ec_overwrites true
sudo ceph osd pool set cephfs_data_ssd_ec_4_1 bulk true
sudo ceph fs add_data_pool cephfs cephfs_data_ssd_ec_4_1
sudo ceph fs subvolumegroup create cephfs csi_ssd_ec_4_1 --pool_layout cephfs_data_ssd_ec_4_1
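As a quick sanity check on the profiles above: the usable fraction of raw capacity for an erasure-coded pool is k/(k+m), and the pool tolerates up to m failed OSDs. A minimal sketch (plain Python arithmetic, not a Ceph API):

```python
# Sanity-check erasure-code profiles: usable capacity fraction is
# k / (k + m), and the pool tolerates up to m failed OSDs.
def ec_profile_stats(k: int, m: int) -> dict:
    return {
        "usable_fraction": k / (k + m),   # fraction of raw space usable
        "raw_per_usable": (k + m) / k,    # raw TB consumed per usable TB
        "max_failures": m,                # OSD failures survivable
    }

for name, (k, m) in {"ec_6_2": (6, 2), "ec_4_1": (4, 1)}.items():
    s = ec_profile_stats(k, m)
    print(f"{name}: {s['usable_fraction']:.0%} usable, tolerates {s['max_failures']} OSD failures")
```

So `ec_6_2` yields 75% usable capacity surviving 2 failures, while `ec_4_1` yields 80% but survives only 1.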
Create a key with access to the new subvolume groups. Check if the user already exists first:
sudo ceph auth get client.kubernetes-cephfs
If it doesn't exist:
sudo ceph auth get-or-create client.kubernetes-cephfs \
mgr 'allow rw' \
osd 'allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs' \
mds 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_6_2, allow rws fsname=cephfs path=/volumes/csi_ssd_ec_4_1' \
mon 'allow r fsname=cephfs'
If it does, use `sudo ceph auth caps client.kubernetes-cephfs ...` instead to update existing capabilities.
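The choice between `get-or-create` and `caps` can be sketched as a small command builder — `get-or-create` returns the existing key without updating capabilities, so an existing client needs `caps` instead. The helper below is illustrative (not a Ceph API); the capability strings are the ones from the command above:

```python
# Build the `ceph auth` argv for a CephFS client. If the client already
# exists, `get-or-create` would return the existing key WITHOUT updating
# its caps, so `caps` must be used instead. (Helper is a sketch.)
CAPS = [
    "mgr", "allow rw",
    "osd", "allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs",
    "mds", ("allow r fsname=cephfs path=/volumes, "
            "allow rws fsname=cephfs path=/volumes/csi_ssd_ec_6_2, "
            "allow rws fsname=cephfs path=/volumes/csi_ssd_ec_4_1"),
    "mon", "allow r fsname=cephfs",
]

def auth_command(client: str, exists: bool) -> list:
    verb = "caps" if exists else "get-or-create"
    return ["ceph", "auth", verb, f"client.{client}", *CAPS]
```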
## removing a subvolumegroup from cephfs
This will clean up the subvolumegroup (and its subvolumes, if any exist), then remove the pool.
Check for subvolumegroups first, then for subvolumes within the group
sudo ceph fs subvolumegroup ls cephfs
sudo ceph fs subvolume ls cephfs --group_name csi_raid6
If subvolumes exist, remove each one-by-one:
sudo ceph fs subvolume rm cephfs <subvol_name> --group_name csi_raid6
If you have snapshots, remove snapshots first:
sudo ceph fs subvolume snapshot ls cephfs <subvol_name> --group_name csi_raid6
sudo ceph fs subvolume snapshot rm cephfs <subvol_name> <snap_name> --group_name csi_raid6
Once the group is empty, remove it:
sudo ceph fs subvolumegroup rm cephfs csi_raid6
If it complains that it's not empty, go back: there's still a subvolume or snapshot.
If you added the pool with `ceph fs add_data_pool`, undo with `rm_data_pool`:
sudo ceph fs rm_data_pool cephfs cephfs_data_csi_raid6
After it's detached from CephFS, you can delete the pool.
sudo ceph osd pool rm cephfs_data_csi_raid6 cephfs_data_csi_raid6 --yes-i-really-really-mean-it
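The ordering above matters: snapshots before subvolumes, subvolumes before the group, detach before pool deletion. A dry-run sketch that generates the command sequence in the right order (inputs and the helper name are illustrative; it prints commands rather than running them):

```python
# Generate the teardown order for a subvolumegroup: snapshots first,
# then subvolumes, then the group, then detach and delete the pool.
# Dry-run sketch only -- it returns command strings, it does not run ceph.
def teardown_plan(fs: str, group: str, pool: str, subvols: dict) -> list:
    cmds = []
    for sv, snaps in subvols.items():
        for snap in snaps:  # snapshots must go before their subvolume
            cmds.append(f"ceph fs subvolume snapshot rm {fs} {sv} {snap} --group_name {group}")
        cmds.append(f"ceph fs subvolume rm {fs} {sv} --group_name {group}")
    cmds.append(f"ceph fs subvolumegroup rm {fs} {group}")
    cmds.append(f"ceph fs rm_data_pool {fs} {pool}")
    cmds.append(f"ceph osd pool rm {pool} {pool} --yes-i-really-really-mean-it")
    return cmds

for cmd in teardown_plan("cephfs", "csi_raid6", "cephfs_data_csi_raid6",
                         {"vol1": ["snap1"]}):
    print("sudo", cmd)
```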
## creating authentication tokens
- this will create a client keyring named media
@@ -150,78 +58,3 @@ this will overwrite the current capabilities of a given client.user
mon 'allow r' \
mds 'allow rw path=/' \
osd 'allow rw pool=media_data'
## adding a new osd on new node
create the ceph conf (automate this?)
cat <<EOF | sudo tee /etc/ceph/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
fsid = de96a98f-3d23-465a-a899-86d3d67edab8
mon_allow_pool_delete = true
mon_initial_members = prodnxsr0009,prodnxsr0010,prodnxsr0011,prodnxsr0012,prodnxsr0013
mon_host = 198.18.23.9,198.18.23.10,198.18.23.11,198.18.23.12,198.18.23.13
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_crush_chooseleaf_type = 1
osd_pool_default_min_size = 2
osd_pool_default_size = 3
osd_pool_default_pg_num = 128
public_network = 198.18.23.1/32,198.18.23.2/32,198.18.23.3/32,198.18.23.4/32,198.18.23.5/32,198.18.23.6/32,198.18.23.7/32,198.18.23.8/32,198.18.23.9/32,198.18.23.10/32,198.18.23.11/32,198.18.23.12/32,198.18.23.13/32
EOF
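The `public_network` value is just every node IP expressed as a /32; generating it keeps the conf and the hiera copy in sync. A sketch (the address range is taken from the conf above):

```python
# Build the comma-separated /32 public_network string from a host range.
# The 198.18.23.1-13 range comes from the ceph.conf above.
def public_network(prefix: str, first: int, last: int) -> str:
    return ",".join(f"{prefix}.{i}/32" for i in range(first, last + 1))

net = public_network("198.18.23", 1, 13)
print(f"public_network = {net}")
```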
ssh to one of the monitor hosts, then transfer the required keys
sudo cat /etc/ceph/ceph.client.admin.keyring | ssh prodnxsr0003 'sudo tee /etc/ceph/ceph.client.admin.keyring'
sudo cat /var/lib/ceph/bootstrap-osd/ceph.keyring | ssh prodnxsr0003 'sudo tee /var/lib/ceph/bootstrap-osd/ceph.keyring'
assuming we are adding /dev/sda to the cluster, first zap the disk to remove partitions/lvm/metadata
sudo ceph-volume lvm zap /dev/sda --destroy
then add it to the cluster
sudo ceph-volume lvm create --data /dev/sda
## removing an osd
check which OSD IDs were on this host (if you know the host)
sudo ceph osd tree
or check for any DOWN osds
sudo ceph osd stat
sudo ceph health detail
once you've identified the old OSD ID, remove it with these steps, replacing X with the actual OSD ID:
sudo ceph osd out osd.X
sudo ceph osd down osd.X
sudo ceph osd crush remove osd.X
sudo ceph auth del osd.X
sudo ceph osd rm osd.X
## maintenance mode for the cluster
from one node in the cluster, disable recovery and rebalancing (note that `pause` also stops client I/O)
sudo ceph osd set noout
sudo ceph osd set nobackfill
sudo ceph osd set norecover
sudo ceph osd set norebalance
sudo ceph osd set nodown
sudo ceph osd set pause
to undo the change, use `unset`
sudo ceph osd unset noout
sudo ceph osd unset nobackfill
sudo ceph osd unset norecover
sudo ceph osd unset norebalance
sudo ceph osd unset nodown
sudo ceph osd unset pause
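Since the `unset` list must mirror the `set` list exactly, generating both from one flag list avoids leaving a stray flag behind. A dry-run sketch:

```python
# Keep maintenance enter/exit symmetric by deriving both command lists
# from a single flag list. Dry-run: returns command strings only.
FLAGS = ["noout", "nobackfill", "norecover", "norebalance", "nodown", "pause"]

def maintenance_cmds(enter: bool) -> list:
    verb = "set" if enter else "unset"
    return [f"ceph osd {verb} {flag}" for flag in FLAGS]

for cmd in maintenance_cmds(enter=True):
    print("sudo", cmd)
```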

View File

@@ -29,21 +29,3 @@ these steps are required when adding additional puppet masters, as the subject a
sudo systemctl start puppetserver
sudo cp /root/current_crl.pem /etc/puppetlabs/puppet/ssl/crl.pem
## troubleshooting
### Issue 1:
[sysadmin@ausyd1nxvm2056 ~]$ sudo puppet agent -t
Error: The CRL issued by 'CN=Puppet CA: prodinf01n01.main.unkin.net' is missing
Find another puppetserver that IS working, copy its `/etc/puppetlabs/puppet/ssl/crl.pem` to this host, then run puppet again.
### Issue 2:
[sysadmin@ausyd1nxvm2097 ~]$ sudo puppet agent -t
Error: Failed to parse CA certificates as PEM
The puppet agent's CA cert `/etc/puppetlabs/puppet/ssl/certs/ca.pem` is empty or missing. Grab it from any other working host, then run puppet again.

View File

@@ -1,6 +1,6 @@
---
profiles::accounts::sysadmin::password: ENC[PKCS7,MIIBqQYJKoZIhvcNAQcDoIIBmjCCAZYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAoS7GyofFaXBNTWU+GtSiz4eCX/9j/sh3fDDRgOgNv1qpcQ87ZlTTenbHo9lxeURxKQ2HVVt7IsrBo/SC/WgipAKnliRkkIvo7nfAs+i+kEE8wakjAs0DcB4mhqtIZRuBkLG2Nay//DcG6cltVkbKEEKmKLMkDFZgTWreOZal8nDljpVe1S8QwtwP4/6hKTef5xsOnrisxuffWTXvwYJhj/VXrjdoH7EhtHGLybzEalglkVHEGft/WrrD/0bwJpmR0RegWI4HTsSvGiHgvf5DZJx8fXPZNPnicGtlfA9ccQPuVo17bY4Qf/WIc1A8Ssv4kHSbNIYJKRymI3UFb0Z4wzBsBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBBBxDLb6pCGbittkcX6asd/gEBmMcUNupDjSECq5H09YA70eVwWWe0fBqxTxrr2cXCXtRKFvOk8SJmL0xHAWodaLN9+krTWHJcWbAK8JXEPC7rn]
-profiles::accounts::root::password: ENC[PKCS7,MIIB2gYJKoZIhvcNAQcDoIIByzCCAccCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAIgzGQLoHrm7JSnWG4vdAtxSuETmnqbV7kUsQS8WCRUwFGenDFkps+OGMnOGEHLzMJzXihLfgfdWwTAI4fp48M+zhTMo9TQkdzZqtbFk3+RjV2jDF0wfe4kVUIpReOq+EkaDSkoRSG8V6hWvszhDHUrJBC9eDhomL0f3xNAWxmy5EIX/uMEvg9Ux5YX+E6k2pEIKnHNoVIaWDojlofSIzIqTSS7l3jQtJhs3YqBzLL1DsoF1kdn+Rwl5kcsKkhV+vzl76wEbpYVZW8lu4bFfP6QHMLPcep2tuUDMCDvARRXD7YyZcAtS7aMuqll+BLAszpWxAA7EU2hgvdr6t2uyVCTCBnAYJKoZIhvcNAQcBMB0GCWCGSAFlAwQBKgQQ4D5oDoyE6LPdjpVtGPoJD4BwfnQ9ORjYFPvHQmt+lgU4jMqh6BhqP0VN3lqVfUpOmiVMIqkO/cYtlwVLKEg36TPCHBSpqvhuahSF5saCVr8JY3xWOAmTSgnNjQOPlGrPnYWYbuRLxVRsU+KUkpAzR0c6VN0wYi6bI85Pcv8yHF3UYA==]
+profiles::accounts::root::password: ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAM79PRxeAZHrDcSm4eSFqU94/LjuSbdUmJWivX/Pa8GumoW2e/PT9nGHW3p98zHthMgCglk52PECQ+TBKjxr+9dTyNK5ePG6ZJEqSHNRqsPGm+kfQj/hlTmq8vOBaFM5GapD1iTHs5JFbGngI56swKBEVXW9+Z37BjQb2xJuyLsu5Bo/tA0BaOKuCtjq1a6E38bOX+nJ+YF1uZgV9ofAEh1YvkcTmnEWYXFRPWd7AaNcWn03V2pfhGqxc+xydak620I47P+FE+qIY72+aQ6tmLU3X9vyA1HLF2Tv572l4a2i+YIk6nAgQdi+hQKznqNL9M9YV+s1AcmcKLT7cfLrjsjA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBCMWrdCWBQgtW3NOEpERwP+gBA3KDiqe4pQq6DwRfsEXQNZ]
profiles::consul::client::secret_id_salt: ENC[PKCS7,MIIBmQYJKoZIhvcNAQcDoIIBijCCAYYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAS7pNFRX4onccFaR87zB/eFFORuF22j6xqjyeAqqjgEduYhkt6w5kkz+YfUoHUesU0Q6F2p6HrCSZ8yAsx5M25NCiud9P4hIpjKmOZ4zCNO7uhATh4AQDYw3BrdRwfO+c6jOl5wOiNLCfDBJ0sFT3akCvcuPS1xIoRJq4Gyn+uCbOsMbvSl25ld2xKt1/cqs8gc1d8mkpjwWto7t+qZSUFMCehTbehH3G4a3Q5rvfBoNwv42Wbs676BDcCurDaAzHNqE7pDbOWhGuVOBl+q+BU0Ri/CRkGcTViN9fr8Dc9SveVC6EPsMbw+05/8/NlfzQse3KAwQ34nR9tR2PQw5qEzBcBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBB7LywscQtF7cG2nomfEsu9gDDVqJBFP1jAX2eGZ2crYS5gnBcsRwhc0HNo2/WWdhZprMW+vEJOOGXDelI53NxA3o0=]
profiles::consul::token::node_editor::secret_id: ENC[PKCS7,MIIBmQYJKoZIhvcNAQcDoIIBijCCAYYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAO8IIF2r18dFf0bVKEwjJUe1TmXHH0AIzsQHxkHwV7d37kvH1cY9rYw0TtdHn7GTxvotJG7GZbWvbunpBs1g2p2RPADiM6TMhbO8mJ0tAWLnMk7bQ221xu8Pc7KceqWmU17dmgNhVCohyfwJNqbA756TlHVgxGA0LtNrKoLOmgKGXAL1VYZoKEQnWq7xOpO+z3e1UfjoO6CvX/Od2hGYfUkHdro8mwRw4GFKzU7XeKFdAMUGpn5rVmY3xe+1ARXwGFaSrTHzk2n85pvwhPRlQ+OwqzyT19Qo2FNeAO6RoCRIFTtqbsjTWPUlseHIhw4Q5bHO1I0Mrlm5IHDESw/22IzBcBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBCEe9wD72qxnpeq5nCi/d7BgDCP29sDFObkFTabt2uZ/nF9MT1g+QOrrdFKgnG6ThnwH1hwpZPsSVgIs+yRQH8laB4=]
profiles::consul::server::acl_tokens_initial_management: ENC[PKCS7,MIIBmQYJKoZIhvcNAQcDoIIBijCCAYYCAQAxggEhMIIBHQIBADAFMAACAQEwDQYJKoZIhvcNAQEBBQAEggEAi1UH7AZirJ1PdxWy+KEgS5ufm0wbn2xy9rkg14hKYpcVjBa4pOZpSLMGMiiUpBIqBytDMZM4ezYa/luktpkBImJbM/TE16beGtsacQGA+9eZk2Tihs9GR2qbAQiu5lLITiDlwNnf0GeWdqHM8CTeD68DczQF320d9U14/k6pG/7z+w/MGLcjsQoSuOFTm42JVn1BI46t1CYSCHMXQc/9Tfs+FzI+vumohI8DxAYBIuyzU5HBX/MntAsvD/yixMJS1pZL9WwgqZJC/wK34rVRB39DpxWf/WROrI+WLuSJwr7WBjaeF9Ju+89WKCgsI53EWhFTj8GgDZm/jqPoE478NjBcBgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBAoACRzJdQKNYXZv6cghFIIgDAzB81DMcuY815nb8POtZpiA06jT/068AoZmSctHoFK/zW9tY229N5r1Tb+WHElqLk=]

View File

@@ -36,12 +36,6 @@ lookup_options:
profiles::haproxy::server::listeners:
merge:
strategy: deep
profiles::accounts::root::sshkeys:
merge:
strategy: deep
profiles::accounts::sysadmin::sshkeys:
merge:
strategy: deep
haproxy::backend:
merge:
strategy: deep
@@ -129,9 +123,6 @@ lookup_options:
profiles::ceph::client::keyrings:
merge:
strategy: deep
profiles::ceph::conf::config:
merge:
strategy: deep
profiles::nginx::simpleproxy::locations:
merge:
strategy: deep
@@ -158,24 +149,6 @@ lookup_options:
zfs::datasets:
merge:
strategy: deep
rke2::config_hash:
merge:
strategy: deep
postfix::configs:
merge:
strategy: deep
postfix::maps:
merge:
strategy: deep
postfix::virtuals:
merge:
strategy: deep
stalwart::postgresql_password:
convert_to: Sensitive
stalwart::s3_secret_key:
convert_to: Sensitive
stalwart::fallback_admin_password:
convert_to: Sensitive
facts_path: '/opt/puppetlabs/facter/facts.d'
@@ -186,13 +159,17 @@ hiera_include:
- profiles::accounts::rundeck
- limits
- sysctl::base
- exporters::node_exporter
profiles::ntp::client::ntp_role: 'roles::infra::ntp::server'
profiles::ntp::client::use_ntp: 'region'
profiles::ntp::client::peers:
- - 0.au.pool.ntp.org
+ - 0.pool.ntp.org
- - 1.au.pool.ntp.org
+ - 1.pool.ntp.org
- - 2.au.pool.ntp.org
+ - 2.pool.ntp.org
- - 3.au.pool.ntp.org
+ - 3.pool.ntp.org
profiles::base::puppet_servers:
- 'prodinf01n01.main.unkin.net'
consul::install_method: 'package'
consul::manage_repo: false
@@ -224,9 +201,6 @@ profiles::consul::client::node_rules:
- resource: node
segment: ''
disposition: read
- resource: service
segment: node_exporter
disposition: write
profiles::packages::include:
bash-completion: {}
@@ -310,8 +284,7 @@ profiles::puppet::client::dns_alt_names:
puppetdbapi: puppetdbapi.query.consul
puppetdbsql: puppetdbsql.service.au-syd1.consul
-exporters::node_exporter::enable: true
+prometheus::node_exporter::export_scrape_job: true
exporters::node_exporter::cleanup_old_node_exporter: true
prometheus::systemd_exporter::export_scrape_job: true
ssh::server::storeconfigs_enabled: false
@@ -376,81 +349,11 @@ networking::route_defaults:
netmask: 0.0.0.0
network: default
# logging:
victorialogs::client::journald::enable: true
victorialogs::client::journald::inserturl: https://vlinsert.service.consul:9428/insert/journald
# FIXME these are for the proxmox ceph cluster
profiles::ceph::client::fsid: 7f7f00cb-95de-498c-8dcc-14b54e4e9ca8
profiles::ceph::client::mons:
- 10.18.15.1
- 10.18.15.2
- 10.18.15.3
profiles::ceph::conf::config:
global:
auth_client_required: 'cephx'
auth_cluster_required: 'cephx'
auth_service_required: 'cephx'
fsid: 'de96a98f-3d23-465a-a899-86d3d67edab8'
mon_allow_pool_delete: true
mon_initial_members: 'prodnxsr0009,prodnxsr0010,prodnxsr0011,prodnxsr0012,prodnxsr0013'
mon_host: '198.18.23.9,198.18.23.10,198.18.23.11,198.18.23.12,198.18.23.13'
ms_bind_ipv4: true
ms_bind_ipv6: false
osd_crush_chooseleaf_type: 1
osd_pool_default_min_size: 2
osd_pool_default_size: 3
osd_pool_default_pg_num: 128
public_network: >
198.18.23.1/32,198.18.23.2/32,198.18.23.3/32,198.18.23.4/32,
198.18.23.5/32,198.18.23.6/32,198.18.23.7/32,198.18.23.8/32,
198.18.23.9/32,198.18.23.10/32,198.18.23.11/32,198.18.23.12/32,
198.18.23.13/32
client.rgw.ausyd1nxvm2115:
rgw_realm: unkin
rgw_zonegroup: au
rgw_zone: syd1
client.rgw.ausyd1nxvm2116:
rgw_realm: unkin
rgw_zonegroup: au
rgw_zone: syd1
client.rgw.ausyd1nxvm2117:
rgw_realm: unkin
rgw_zonegroup: au
rgw_zone: syd1
client.rgw.ausyd1nxvm2118:
rgw_realm: unkin
rgw_zonegroup: au
rgw_zone: syd1
client.rgw.ausyd1nxvm2119:
rgw_realm: unkin
rgw_zonegroup: au
rgw_zone: syd1
mds:
keyring: /var/lib/ceph/mds/ceph-$id/keyring
mds_standby_replay: true
mds.prodnxsr0009-1:
host: prodnxsr0009
mds.prodnxsr0009-2:
host: prodnxsr0009
mds.prodnxsr0010-1:
host: prodnxsr0010
mds.prodnxsr0010-2:
host: prodnxsr0010
mds.prodnxsr0011-1:
host: prodnxsr0011
mds.prodnxsr0011-2:
host: prodnxsr0011
mds.prodnxsr0012-1:
host: prodnxsr0012
mds.prodnxsr0012-2:
host: prodnxsr0012
mds.prodnxsr0013-1:
host: prodnxsr0013
mds.prodnxsr0013-2:
host: prodnxsr0013
#profiles::base::hosts::additional_hosts:
# - ip: 198.18.17.9
# hostname: prodinf01n09.main.unkin.net

View File

@@ -1,9 +1,7 @@
---
timezone::timezone: 'Australia/Sydney'
-certbot::client::webserver: ausyd1nxvm2057.main.unkin.net
+certbot::client::webserver: ausyd1nxvm1021.main.unkin.net
profiles_dns_upstream_forwarder_unkin:
- 198.18.19.15
profiles_dns_upstream_forwarder_consul:
- 198.18.19.14
profiles_dns_upstream_forwarder_k8s:
- 198.18.19.20

View File

@@ -3,7 +3,7 @@ hiera_include:
- keepalived
# keepalived
-profiles::haproxy::dns::ipaddr: '198.18.13.250'
+profiles::haproxy::dns::vrrp_ipaddr: '198.18.13.250'
profiles::haproxy::dns::vrrp_cnames:
- sonarr.main.unkin.net
- radarr.main.unkin.net

View File

@@ -1,424 +0,0 @@
---
profiles::haproxy::dns::ipaddr: "%{hiera('anycast_ip')}"
profiles::haproxy::dns::vrrp_cnames:
- sonarr.main.unkin.net
- radarr.main.unkin.net
- lidarr.main.unkin.net
- readarr.main.unkin.net
- prowlarr.main.unkin.net
- nzbget.main.unkin.net
- git.unkin.net
- fafflix.unkin.net
- grafana.unkin.net
- dashboard.ceph.unkin.net
- mail-webadmin.main.unkin.net
- mail-in.main.unkin.net
- mail.main.unkin.net
- autoconfig.main.unkin.net
- autodiscover.main.unkin.net
profiles::haproxy::mappings:
fe_http:
ensure: present
mappings:
- 'au-syd1-pve.main.unkin.net be_ausyd1pve_web'
- 'au-syd1-pve-api.main.unkin.net be_ausyd1pve_api'
- 'sonarr.main.unkin.net be_sonarr'
- 'radarr.main.unkin.net be_radarr'
- 'lidarr.main.unkin.net be_lidarr'
- 'readarr.main.unkin.net be_readarr'
- 'prowlarr.main.unkin.net be_prowlarr'
- 'nzbget.main.unkin.net be_nzbget'
- 'jellyfin.main.unkin.net be_jellyfin'
- 'fafflix.unkin.net be_jellyfin'
- 'git.unkin.net be_gitea'
- 'grafana.unkin.net be_grafana'
- 'dashboard.ceph.unkin.net be_ceph_dashboard'
- 'mail-webadmin.main.unkin.net be_stalwart_webadmin'
- 'autoconfig.main.unkin.net be_stalwart_webadmin'
- 'autodiscovery.main.unkin.net be_stalwart_webadmin'
fe_https:
ensure: present
mappings:
- 'au-syd1-pve.main.unkin.net be_ausyd1pve_web'
- 'au-syd1-pve-api.main.unkin.net be_ausyd1pve_api'
- 'sonarr.main.unkin.net be_sonarr'
- 'radarr.main.unkin.net be_radarr'
- 'lidarr.main.unkin.net be_lidarr'
- 'readarr.main.unkin.net be_readarr'
- 'prowlarr.main.unkin.net be_prowlarr'
- 'nzbget.main.unkin.net be_nzbget'
- 'jellyfin.main.unkin.net be_jellyfin'
- 'fafflix.unkin.net be_jellyfin'
- 'git.unkin.net be_gitea'
- 'grafana.unkin.net be_grafana'
- 'dashboard.ceph.unkin.net be_ceph_dashboard'
- 'mail-webadmin.main.unkin.net be_stalwart_webadmin'
- 'autoconfig.main.unkin.net be_stalwart_webadmin'
- 'autodiscovery.main.unkin.net be_stalwart_webadmin'
profiles::haproxy::frontends:
fe_http:
options:
use_backend:
- "%[req.hdr(host),lower,map(/etc/haproxy/fe_http.map,be_default)]"
fe_https:
options:
acl:
- 'acl_ausyd1pve req.hdr(host) -i au-syd1-pve.main.unkin.net'
- 'acl_sonarr req.hdr(host) -i sonarr.main.unkin.net'
- 'acl_radarr req.hdr(host) -i radarr.main.unkin.net'
- 'acl_lidarr req.hdr(host) -i lidarr.main.unkin.net'
- 'acl_readarr req.hdr(host) -i readarr.main.unkin.net'
- 'acl_prowlarr req.hdr(host) -i prowlarr.main.unkin.net'
- 'acl_nzbget req.hdr(host) -i nzbget.main.unkin.net'
- 'acl_jellyfin req.hdr(host) -i jellyfin.main.unkin.net'
- 'acl_fafflix req.hdr(host) -i fafflix.unkin.net'
- 'acl_gitea req.hdr(host) -i git.unkin.net'
- 'acl_grafana req.hdr(host) -i grafana.unkin.net'
- 'acl_ceph_dashboard req.hdr(host) -i dashboard.ceph.unkin.net'
- 'acl_stalwart_webadmin req.hdr(host) -i mail-webadmin.main.unkin.net'
- 'acl_stalwart_webadmin req.hdr(host) -i autoconfig.main.unkin.net'
- 'acl_stalwart_webadmin req.hdr(host) -i autodiscovery.main.unkin.net'
- 'acl_internalsubnets src 198.18.0.0/16 10.10.12.0/24'
use_backend:
- "%[req.hdr(host),lower,map(/etc/haproxy/fe_https.map,be_default)]"
http-request:
- 'deny if { hdr_dom(host) -i au-syd1-pve.main.unkin.net } !acl_internalsubnets'
http-response:
- 'set-header X-Frame-Options DENY if acl_ausyd1pve'
- 'set-header X-Frame-Options DENY if acl_sonarr'
- 'set-header X-Frame-Options DENY if acl_radarr'
- 'set-header X-Frame-Options DENY if acl_lidarr'
- 'set-header X-Frame-Options DENY if acl_readarr'
- 'set-header X-Frame-Options DENY if acl_prowlarr'
- 'set-header X-Frame-Options DENY if acl_nzbget'
- 'set-header X-Frame-Options DENY if acl_jellyfin'
- 'set-header X-Frame-Options DENY if acl_fafflix'
- 'set-header X-Frame-Options DENY if acl_gitea'
- 'set-header X-Frame-Options DENY if acl_grafana'
- 'set-header X-Frame-Options DENY if acl_ceph_dashboard'
- 'set-header X-Frame-Options DENY if acl_stalwart_webadmin'
- 'set-header X-Content-Type-Options nosniff'
- 'set-header X-XSS-Protection 1;mode=block'
profiles::haproxy::backends:
be_ausyd1pve_web:
description: Backend for au-syd1 pve cluster (Web)
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_ausyd1pve_api:
description: Backend for au-syd1 pve cluster (API only)
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_sonarr:
description: Backend for au-syd1 sonarr
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /consul/health
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_radarr:
description: Backend for au-syd1 radarr
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /consul/health
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_lidarr:
description: Backend for au-syd1 lidarr
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /consul/health
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_readarr:
description: Backend for au-syd1 readarr
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /consul/health
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_prowlarr:
description: Backend for au-syd1 prowlarr
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /consul/health
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_nzbget:
description: Backend for au-syd1 nzbget
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /consul/health
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_jellyfin:
description: Backend for au-syd1 jellyfin
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
be_gitea:
description: Backend for gitea cluster
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
stick-table: 'type ip size 200k expire 30m'
stick: 'on src'
be_grafana:
description: Backend for grafana nodes
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 443 }
redirect: 'scheme https if !{ ssl_fc }'
stick-table: 'type ip size 200k expire 30m'
stick: 'on src'
be_ceph_dashboard:
description: Backend for Ceph Dashboard from Mgr instances
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-check:
- expect status 200
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 9443 }
redirect: 'scheme https if !{ ssl_fc }'
stick-table: 'type ip size 200k expire 30m'
be_stalwart_webadmin:
description: Backend for Stalwart Webadmin
collect_exported: false # handled in custom function
options:
balance: roundrobin
option:
- httpchk GET /
- forwardfor
- http-keep-alive
- prefer-last-server
cookie: SRVNAME insert indirect nocache
http-reuse: always
http-check:
- expect status 200
http-request:
- set-header X-Forwarded-Port %[dst_port]
- add-header X-Forwarded-Proto https if { dst_port 9443 }
redirect: 'scheme https if !{ ssl_fc }'
stick-table: 'type ip size 200k expire 30m'
be_stalwart_imap:
description: Backend for Stalwart IMAP (STARTTLS)
collect_exported: false
options:
mode: tcp
balance: roundrobin
option:
- tcp-check
- prefer-last-server
stick-table: 'type ip size 200k expire 30m'
stick: 'on src'
tcp-check:
- connect port 143 send-proxy
- expect string "* OK"
- send "A001 STARTTLS\r\n"
- expect rstring "A001 (OK|2.0.0)"
be_stalwart_imaps:
description: Backend for Stalwart IMAPS (implicit TLS)
collect_exported: false
options:
mode: tcp
balance: roundrobin
option:
- tcp-check
- prefer-last-server
stick-table: 'type ip size 200k expire 30m'
stick: 'on src'
tcp-check:
- connect ssl send-proxy
- expect string "* OK"
be_stalwart_smtp:
description: Backend for Stalwart SMTP
collect_exported: false
options:
mode: tcp
balance: roundrobin
option:
- tcp-check
- prefer-last-server
stick-table: 'type ip size 200k expire 30m'
stick: 'on src'
tcp-check:
- connect port 25 send-proxy
- expect string "220 "
be_stalwart_submission:
description: Backend for Stalwart SMTP Submission
collect_exported: false
options:
mode: tcp
balance: roundrobin
option:
- tcp-check
- prefer-last-server
stick-table: 'type ip size 200k expire 30m'
stick: 'on src'
tcp-check:
- connect port 587 send-proxy
- expect string "220 "
profiles::haproxy::certlist::enabled: true
profiles::haproxy::certlist::certificates:
- /etc/pki/tls/letsencrypt/au-syd1-pve.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/au-syd1-pve-api.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/sonarr.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/radarr.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/lidarr.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/readarr.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/prowlarr.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/nzbget.main.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/fafflix.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/git.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/grafana.unkin.net/fullchain_combined.pem
- /etc/pki/tls/letsencrypt/dashboard.ceph.unkin.net/fullchain_combined.pem
- /etc/pki/tls/vault/certificate.pem
# additional altnames
profiles::pki::vault::alt_names:
- au-syd1-pve.main.unkin.net
- au-syd1-pve-api.main.unkin.net
- jellyfin.main.unkin.net
- mail-webadmin.main.unkin.net
# additional cnames
profiles::haproxy::dns::cnames:
- au-syd1-pve.main.unkin.net
- au-syd1-pve-api.main.unkin.net
# letsencrypt certificates
certbot::client::service: haproxy
certbot::client::domains:
- au-syd1-pve.main.unkin.net
- au-syd1-pve-api.main.unkin.net
- sonarr.main.unkin.net
- radarr.main.unkin.net
- lidarr.main.unkin.net
- readarr.main.unkin.net
- prowlarr.main.unkin.net
- nzbget.main.unkin.net
- fafflix.unkin.net
- git.unkin.net
- grafana.unkin.net
- dashboard.ceph.unkin.net

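The be_stalwart_imap tcp-check above drives a STARTTLS handshake against the server greeting. A minimal Python sketch of the same probe, with the PROXY-protocol preamble (send-proxy) omitted for brevity — host and port here are illustrative, not taken from the hieradata:

```python
import re
import socket

def imap_starttls_probe(host: str, port: int = 143, timeout: float = 5.0) -> bool:
    """Mimic the haproxy tcp-check: expect the "* OK" greeting, send
    "A001 STARTTLS", and accept any reply matching "A001 (OK|2.0.0)".
    Note: the real haproxy check also sends a PROXY header (send-proxy),
    which this sketch skips."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        greeting = sock.recv(1024).decode(errors="replace")
        if not greeting.startswith("* OK"):
            return False
        sock.sendall(b"A001 STARTTLS\r\n")
        reply = sock.recv(1024).decode(errors="replace")
        return re.search(r"A001 (OK|2\.0\.0)", reply) is not None
```

Like the haproxy check, the probe stops after matching the STARTTLS reply; it does not complete the TLS negotiation itself.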
View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.10
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.11
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.12
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.13
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.14
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.15
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.16
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.17
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.18
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.19
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.20
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.21
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.22
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.23
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.24
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,13 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.25
networking::routes:
default:
gateway: 198.18.13.254
profiles::haproxy::dns::vrrp_master: true
keepalived::vrrp_instance:
VI_250:
state: 'MASTER'
priority: 101

View File

@ -0,0 +1,12 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.26
networking::routes:
default:
gateway: 198.18.13.254
keepalived::vrrp_instance:
VI_250:
state: 'BACKUP'
priority: 100

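The two hosts above share VRRP instance VI_250; the MASTER's higher priority (101 vs 100) decides which node owns the VIP while both are healthy. A toy sketch of the election rule — the host names are illustrative:

```python
def vrrp_master(priorities: dict[str, int]) -> str:
    """Return the host that wins the VRRP election: highest priority
    wins. Real VRRP breaks ties by highest primary IP address, which
    this toy stands in for with the lexically greatest name."""
    return max(priorities, key=lambda host: (priorities[host], host))
```

With priorities 101/100, failure of the master removes it from the set and the backup takes over; on recovery the higher-priority node preempts by default.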
View File

@ -0,0 +1,11 @@
---
profiles::cobbler::params::is_cobbler_master: true
networking::interfaces:
ens18:
ipaddress: 198.18.13.27
networking::routes:
default:
gateway: 198.18.13.254
interface: ens18
profiles::almalinux::base::remove_ens18: false

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.28
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.29
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.30
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,10 @@
---
networking::interfaces:
ens18:
ipaddress: 198.18.13.31
networking::routes:
default:
gateway: 198.18.13.254
interface: ens18
profiles::almalinux::base::remove_ens18: false

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.32
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.33
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.34
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.35
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.36
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.37
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.38
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.39
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.40
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.41
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.42
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.43
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.44
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.45
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -13,3 +13,9 @@ profiles::ssh::sign::principals:
 profiles::puppet::puppetca::is_puppetca: true
 profiles::puppet::puppetca::allow_subject_alt_names: true
+networking::interfaces:
+  eth0:
+    ipaddress: 198.18.13.46
+networking::routes:
+  default:
+    gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.47
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.47
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.48
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.49
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.50
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.50
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.51
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.51
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.52
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.52
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.53
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.53
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.54
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.55
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.56
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.57
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.57
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,14 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.58
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.58
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.59
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.60
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.61
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.62
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.63
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.64
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.65
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.66
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.67
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.68
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.69
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.70
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.71
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.72
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.73
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,15 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.74
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.74
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254
docker::bip: '198.18.64.254/24'

View File

@ -0,0 +1,15 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.75
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.75
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254
docker::bip: '198.18.65.254/24'

View File

@ -0,0 +1,15 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.76
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.76
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254
docker::bip: '198.18.66.254/24'

View File

@ -0,0 +1,15 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.77
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.77
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254
docker::bip: '198.18.67.254/24'

View File

@ -0,0 +1,15 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.78
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.78
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254
docker::bip: '198.18.68.254/24'

View File

@ -0,0 +1,15 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.79
ens19:
ensure: present
family: inet
method: static
ipaddress: 10.18.15.79
netmask: 255.255.255.0
onboot: true
networking::routes:
default:
gateway: 198.18.13.254
docker::bip: '198.18.69.254/24'

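Each docker host above gets its own /24 bridge network, using the .254 address as the bridge IP (198.18.64.254/24 through 198.18.69.254/24). A sketch of that allocation scheme — the /18 supernet is an assumption for illustration, not taken from the hieradata:

```python
import ipaddress

def docker_bip(host_index: int, base: str = "198.18.64.0/18") -> str:
    """Bridge IP for the N-th docker host: the .254 gateway address of
    the N-th /24 carved out of an assumed base supernet."""
    subnet = list(ipaddress.ip_network(base).subnets(new_prefix=24))[host_index]
    return f"{subnet.network_address + 254}/24"
```

docker_bip(0) yields the first host's bip and docker_bip(5) the sixth's, matching the six files above.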
View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.80
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.81
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,7 @@
---
networking::interfaces:
eth0:
ipaddress: 198.18.13.82
networking::routes:
default:
gateway: 198.18.13.254

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
profiles::consul::server::anycast_ip: 198.18.19.14
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('profiles::consul::server::anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

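The hiera keys above (router id, redistributed protocols, per-interface areas) map onto an frr ospfd stanza: the dummy anycast0 holds the /32 anycast IP, and redistributing connected routes advertises it into OSPF from every server. A rough sketch of what such a template might render — this is not the actual frrouting module template:

```python
def render_ospfd(router_id: str, redistribute: list[str],
                 interfaces: dict[str, str]) -> str:
    """Render a minimal frr ospfd config from hiera-style inputs.
    Illustrative only; the real module's template may differ."""
    lines = ["router ospf", f" ospf router-id {router_id}"]
    lines += [f" redistribute {proto}" for proto in redistribute]
    lines.append("exit")
    for iface, area in interfaces.items():
        lines += [f"interface {iface}", f" ip ospf area {area}", "exit"]
    return "\n".join(lines)
```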
View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
profiles::consul::server::anycast_ip: 198.18.19.14
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('profiles::consul::server::anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
profiles::consul::server::anycast_ip: 198.18.19.14
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('profiles::consul::server::anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
profiles::consul::server::anycast_ip: 198.18.19.14
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('profiles::consul::server::anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
profiles::consul::server::anycast_ip: 198.18.19.14
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('profiles::consul::server::anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
dns_master_anycast_ip: 198.18.19.15
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('dns_master_anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
dns_master_anycast_ip: 198.18.19.15
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('dns_master_anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
dns_master_anycast_ip: 198.18.19.15
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('dns_master_anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
dns_resolver_anycast_ip: 198.18.19.16
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('dns_resolver_anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
dns_resolver_anycast_ip: 198.18.19.16
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('dns_resolver_anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -0,0 +1,47 @@
---
hiera_include:
- frrouting
# networking
dns_resolver_anycast_ip: 198.18.19.16
systemd::manage_networkd: true
systemd::manage_all_network_files: true
networking::interfaces:
eth0:
type: physical
forwarding: true
dhcp: true
anycast0:
type: dummy
ipaddress: "%{hiera('dns_resolver_anycast_ip')}"
netmask: 255.255.255.255
mtu: 1500
# frrouting
frrouting::ospfd_router_id: "%{facts.networking.ip}"
frrouting::ospfd_redistribute:
- connected
frrouting::ospfd_interfaces:
eth0:
area: 0.0.0.0
anycast0:
area: 0.0.0.0
frrouting::daemons:
ospfd: true
# additional repos
profiles::yum::global::repos:
frr-extras:
name: frr-extras
descr: frr-extras repository
target: /etc/yum.repos.d/frr-extras.repo
baseurl: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/extras-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent
frr-stable:
name: frr-stable
descr: frr-stable repository
target: /etc/yum.repos.d/frr-stable.repo
baseurl: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os
gpgkey: https://packagerepo.service.consul/frr/el9/stable-daily/%{facts.os.architecture}/os/RPM-GPG-KEY-FRR
mirrorlist: absent

View File

@ -1,2 +0,0 @@
---
profiles::cobbler::params::is_cobbler_master: true

View File

@ -0,0 +1,12 @@
---
profiles::puppet::server::dns_alt_names:
- puppetca.main.unkin.net
- puppetca.service.consul
- puppetca.query.consul
- puppetca
profiles::puppet::puppetca::is_puppetca: false
profiles::puppet::puppetca::allow_subject_alt_names: true
hiera_exclude:
- networking

View File

@ -1,13 +1,5 @@
 ---
-networking_loopback0_ip: 198.18.19.1 # management loopback
-networking_loopback1_ip: 198.18.22.1 # ceph-cluster loopback
-networking_loopback2_ip: 198.18.23.1 # ceph-public loopback
-networking_1000_ip: 198.18.15.1 # 1gbe network
-networking_2500_ip: 198.18.21.1 # 2.5gbe network
-networking_1000_iface: enp2s0
-networking_2500_iface: enp3s0
-networking::interfaces:
-  "%{hiera('networking_1000_iface')}":
-    mac: d8:9e:f3:75:c3:60
-  "%{hiera('networking_2500_iface')}":
-    mac: 00:ac:d0:00:00:50
+profiles::proxmox::params::pve_clusterinit_master: true
+profiles::proxmox::params::pve_ceph_mon: true
+profiles::proxmox::params::pve_ceph_mgr: true
+profiles::proxmox::params::pve_ceph_osd: true

View File

@ -1,13 +1,4 @@
 ---
-networking_loopback0_ip: 198.18.19.2 # management loopback
-networking_loopback1_ip: 198.18.22.2 # ceph-cluster loopback
-networking_loopback2_ip: 198.18.23.2 # ceph-public loopback
-networking_1000_ip: 198.18.15.2 # 1gbe network
-networking_2500_ip: 198.18.21.2 # 2.5gbe network
-networking_1000_iface: enp2s0
-networking_2500_iface: enp3s0
-networking::interfaces:
-  "%{hiera('networking_1000_iface')}":
-    mac: d8:9e:f3:74:b6:08
-  "%{hiera('networking_2500_iface')}":
-    mac: 00:e0:4c:68:08:43
+profiles::proxmox::params::pve_ceph_mon: true
+profiles::proxmox::params::pve_ceph_mgr: true
+profiles::proxmox::params::pve_ceph_osd: true

View File

@ -1,13 +1,4 @@
 ---
-networking_loopback0_ip: 198.18.19.3 # management loopback
-networking_loopback1_ip: 198.18.22.3 # ceph-cluster loopback
-networking_loopback2_ip: 198.18.23.3 # ceph-public loopback
-networking_1000_ip: 198.18.15.3 # 1gbe network
-networking_2500_ip: 198.18.21.3 # 2.5gbe network
-networking_1000_iface: enp2s0
-networking_2500_iface: enp3s0
-networking::interfaces:
-  "%{hiera('networking_1000_iface')}":
-    mac: b8:85:84:a3:25:c5
-  "%{hiera('networking_2500_iface')}":
-    mac: 00:e0:4c:68:07:82
+profiles::proxmox::params::pve_ceph_mon: true
+profiles::proxmox::params::pve_ceph_mgr: true
+profiles::proxmox::params::pve_ceph_osd: true

View File

@ -1,13 +1,2 @@
 ---
-networking_loopback0_ip: 198.18.19.4 # management loopback
-networking_loopback1_ip: 198.18.22.4 # ceph-cluster loopback
-networking_loopback2_ip: 198.18.23.4 # ceph-public loopback
-networking_1000_ip: 198.18.15.4 # 1gbe network
-networking_2500_ip: 198.18.21.4 # 2.5gbe network
-networking_1000_iface: enp2s0
-networking_2500_iface: enp3s0
-networking::interfaces:
-  "%{hiera('networking_1000_iface')}":
-    mac: d8:9e:f3:75:d5:00
-  "%{hiera('networking_2500_iface')}":
-    mac: 00:ac:d0:00:00:43
+profiles::proxmox::params::pve_ceph_osd: true

Some files were not shown because too many files have changed in this diff.