init coming over from kvm repo

This commit is contained in:
Doug Masiero 2025-04-23 17:58:04 -04:00
parent 0d13a0172b
commit de6252a4ee
28 changed files with 1312 additions and 0 deletions

.gitignore (new file)
@@ -0,0 +1,7 @@
.modules/
.resource_types/
bolt-debug.log
.plan_cache.json
.plugin_cache.json
.task_cache.json
.rerun.json

@@ -1,2 +1,89 @@
# bolt
## Creating the VM using Bolt
Update the parameters provided to the below plan run command as needed. Note that you should always run plans and tasks out of the `bolt` directory.
```bash
cd bolt
bolt plan run ubuntu::create_vm \
  target_host=vortex \
  vm_name=moeny-bank01 \
  hostname=moeny-bank01 \
  ip_with_cidr=100.40.223.189/24
```
## Alpine VMs
There are now separate plans for generating a VM using Alpine and Ubuntu. [alpine::create_vm](bolt/vm_automation/alpine/plans/create_vm.yaml) should be run for Alpine and [ubuntu::create_vm](bolt/vm_automation/ubuntu/plans/create_vm.yaml) should be run for Ubuntu. These plans each run tasks tailored for the appropriate distribution.
Below is a sample command to run the Alpine bolt plan.
```bash
bolt plan run alpine::create_vm \
  vm_name=moeny-service \
  hostname=moeny-service \
  ip_with_cidr=100.40.223.189/24 \
  add_a_record_bool=true \
  dns_hostname=service
```
Note that `add_a_record_bool` must be set to `true` if you would like an A record for the VM to be added to the DNS server zone file; it is `false` by default. If you use this functionality, also provide `dns_hostname`, and optionally `dns_ttl` if you do not want the default of `3600`. Interacting with the DNS server requires a TSIG key set up on your DNS server for dynamic updates, with a copy of your `tsig.key` file stored in a directory called `keys` at the root of the bolt project, alongside `bolt-project.yaml`. If either of these conditions has not been met, do not attempt to use this functionality. For more information on setting up dynamic DNS with a TSIG key, see our [bind9](https://gitea.moeny.ai/moeny/bind9) repo.
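For reference, BIND ships a `tsig-keygen` utility that can generate a suitable key; this is only a sketch, and the key name `moeny-ddns` is an example — your `named.conf` must reference the same key name and secret:

```shell
# Sketch only: generate a TSIG key and stage it where the tasks expect it.
# "moeny-ddns" is a placeholder key name; match whatever your bind9 setup uses.
tsig-keygen -a hmac-sha256 moeny-ddns > tsig.key
mkdir -p keys
mv tsig.key keys/tsig.key   # alongside bolt-project.yaml
```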
Similarly, `install_docker_bool` can be set to `false` if you do not want Docker installed on the VM; it is `true` by default.
For more detailed logging, add the `-v` flag to the end of the `bolt plan run` command.
If you want to delete an A record that you have added, you can use the [`delete_dns_a_record`](bolt/vm_automation/common/tasks/delete_dns_a_record.sh) task. You'll just need to provide it with the `dns_hostname` you set. Here's a sample command.
```bash
bolt task run common::delete_dns_a_record dns_hostname=service --targets localhost
```
Lastly, even though it is designed to be run as part of the `alpine::create_vm` plan, you can also run the [`add_dns_a_record`](bolt/vm_automation/common/tasks/add_dns_a_record.sh) task on its own. You'll just need to provide a few parameters. Here's a sample command.
```bash
bolt task run common::add_dns_a_record add_a_record_bool=true ip_with_cidr=100.40.223.189/24 dns_hostname=service dns_ttl=3600 --targets localhost
```
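To confirm the record actually landed on the authoritative server, you can query it directly (assuming `dig` is installed locally):

```shell
# Query the authoritative nameserver for the record just added
dig +short A service.moeny.ai @ns1.moeny.ai
```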
Alternatively, to update DNS with the `nsupdate` command directly from the terminal, run something like the following with the path to your `tsig.key`:
```bash
nsupdate -k ./keys/tsig.key << EOF
server ns1.moeny.ai
debug yes
zone moeny.ai
update add service.moeny.ai 3600 A 6.5.2.5
send
EOF
ssh moeny@ns1.moeny.ai "sudo rndc sync moeny.ai"
```
## VMs on an Internal Network
In order to spin up VMs on an internal network, you will need to generate an Alpine ISO compatible with the internal IPs you are using and specify its path. You will also need to set the staging IP and gateway IP parameters accordingly. Here is a sample command to run the Alpine bolt plan.
```bash
bolt plan run alpine::create_vm \
  vm_name=moeny-service-alpine \
  hostname=moeny-service-alpine \
  network=internal-moeny \
  ip_with_cidr=10.44.0.20/24 \
  gateway_ip=10.44.0.1 \
  iso_path=/mnt/nfs/kvm-isos/iso-build/alpine-autoinstall-internal_moeny.iso \
  staging_ip=10.44.0.250 -v
```
Similarly, a new Ubuntu ISO will need to be generated that is compatible with the internal IPs. This can be done by updating the `user-data` file from Step 6 to have the proper network configuration, as in [`user-data-internal`](user-data-internal.yaml). Here is a sample command to run the Ubuntu bolt plan.
```bash
bolt plan run ubuntu::create_vm \
  vm_name=moeny-service-ubuntu \
  hostname=moeny-service-ubuntu \
  network=internal-moeny \
  ip_with_cidr=10.44.0.20/24 \
  gateway_ip=10.44.0.1 \
  iso_path=/mnt/nfs/kvm-isos/iso-build/ubuntu-22.04-autoinstall-internal_moeny.iso \
  staging_ip=internal -v
```

bolt-project.yaml (new file)
@@ -0,0 +1,4 @@
---
name: vm_automation
modulepath:
- vm_automation

@@ -0,0 +1,20 @@
#!/sbin/openrc-run
description="Custom moeny network and iptables setup"

depend() {
    # Run after networking and libvirt (if used) are up
    after network-online libvirtd
    need net
}

start() {
    ebegin "Setting moeny network routes and iptables"
    /usr/local/bin/setup-moeny-network.sh
    eend $?
}

stop() {
    ebegin "Stopping moeny network setup (no-op)"
    eend 0
}
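Assuming this script is installed as `/etc/init.d/moeny-network` (the name is illustrative; match the filename you install), it can be enabled and started with the usual OpenRC commands:

```shell
# Hypothetical service name; use whatever name the file has under /etc/init.d
rc-update add moeny-network default
rc-service moeny-network start
```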

@@ -0,0 +1,15 @@
#!/bin/sh
# Wait for interfaces to be up (optional, adjust as needed)
while ! ip link show virbr0 >/dev/null 2>&1 || ! ip link show br1 >/dev/null 2>&1; do
    sleep 1
done
# Routing table setup
ip route add 10.88.0.0/24 via 10.44.0.3 dev virbr0
# Forwarding rules for traffic coming from and to masiero LAN.
# These rules are saved via iptables-save in /etc/iptables/rules.v4 - They are commented below for reference.
#
# iptables -I FORWARD 1 -i br1 -o virbr0 -d 10.44.0.0/24 -j ACCEPT
# iptables -I FORWARD 2 -i virbr0 -o br1 -s 10.44.0.0/24 -j ACCEPT
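The commented rules above can be applied and persisted once by hand; this sketch assumes the `/etc/iptables/rules.v4` path the script references:

```shell
# Apply the forwarding rules, then persist them to the path noted in the script
iptables -I FORWARD 1 -i br1 -o virbr0 -d 10.44.0.0/24 -j ACCEPT
iptables -I FORWARD 2 -i virbr0 -o br1 -s 10.44.0.0/24 -j ACCEPT
iptables-save > /etc/iptables/rules.v4
```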

inventory.yaml (new file)
@@ -0,0 +1,79 @@
groups:
  - name: remote-host
    config:
      transport: ssh
      ssh:
        user: root
        host-key-check: false
    targets:
      - name: vortex
        uri: vortex.moeny.ai
      - name: astrocore
        uri: astrocore.masiero.us
  - name: new-vm
    config:
      transport: ssh
      ssh:
        user: moeny
        private-key: ~/.ssh/DMMF-20211104
        host-key-check: false
    targets:
      - name: public
        config:
          ssh:
            host: 100.40.223.190
      - name: internal
        config:
          ssh:
            host: 10.44.0.250
  - name: alpine-vms
    config:
      transport: ssh
      ssh:
        user: moeny
        private-key: ~/.ssh/DMMF-20211104
        host-key-check: false
    targets:
      - name: moeny-ns99
        uri: ns99.moeny.internal
      - name: moeny-vaultwarden01
        uri: vault.moeny.internal
      - name: moeny-victorinox
        uri: victorinox.moeny.ai
      - name: moeny-vpn01
        uri: vpn01.moeny.ai
  - name: ubuntu-vms
    config:
      transport: ssh
      ssh:
        user: moeny
        private-key: ~/.ssh/DMMF-20211104
        host-key-check: false
    targets:
      - name: moeny-appflowy01
        uri: appflowy.moeny.ai
      - name: moeny-asterisk01
        uri: asterisk.moeny.ai
      - name: moeny-gitea01
        uri: gitea.moeny.ai
      - name: moeny-jitsi01
        uri: jitsi.moeny.ai
      - name: moeny-mail01
        uri: mail01.moeny.ai
      - name: moeny-ns01
        uri: ns1.moeny.ai
      - name: moeny-plausible01
        uri: plausible.moeny.ai
      - name: moeny-radicale01
        uri: radicale.moeny.ai
      - name: moeny-rocketchat01
        uri: rocketchat.moeny.ai
      - name: moeny-zabbix01
        uri: zabbix.moeny.ai
config:
  ssh:
    native-ssh: true
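With this inventory in place, the group names double as targets; for example, to run an ad-hoc command across every Alpine VM:

```shell
# Run from the bolt directory so inventory.yaml is picked up
bolt command run 'uname -a' --targets alpine-vms
```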

@@ -0,0 +1,181 @@
---
# Plan to Create an Alpine VM (alpine::create_vm)
parameters:
  target_host:
    type: String
    description: "Target host to create the VM on"
    default: "vortex"
  ## Main Configurations
  vm_name:
    type: String
    description: "Name of the VM"
    default: "vm-template-staging"
  # Network Configuration
  hostname:
    type: String
    description: "Hostname of the VM"
    default: "vm-template-staging"
  network:
    type: String
    description: "Network to connect the VM to"
    default: "wan-verizon"
  ip_with_cidr:
    type: String
    description: "Public IP of the VM"
    default: "100.40.223.190/24"
  gateway_ip:
    type: String
    description: "Gateway IP for the VM"
    default: "100.40.223.1"
  # Define Based on Whether Public or Internal VM
  iso_path:
    type: String
    description: "Path to the ISO file"
    default: "/mnt/nfs/kvm-isos/iso-build/alpine-autoinstall-wan_verizon.iso"
  staging_ip:
    type: String
    description: "Staging IP"
    default: "100.40.223.190"
  ## Optional Configurations
  # Zabbix
  install_zabbix_bool:
    type: Boolean
    description: "Whether to install Zabbix on the VM"
    default: true
  # Docker
  install_docker_bool:
    type: Boolean
    description: "Whether to install Docker on the VM"
    default: true
  # DNS
  add_a_record_bool:
    type: Boolean
    description: "Whether to add a DNS A record for the VM"
    default: false
  dns_hostname:
    type: String
    description: "Hostname for the DNS A record"
    default: "vm-template-staging"
  dns_ttl:
    type: Integer
    description: "TTL for the DNS A record"
    default: 3600
  ## Rarely Changed Configurations
  # VM Specifications
  ram:
    type: Integer
    description: "Amount of RAM in MB"
    default: 8192
  vcpus:
    type: Integer
    description: "Number of virtual CPUs"
    default: 4
  disk_size:
    type: Integer
    description: "Size of the disk in GB"
    default: 100
  disk_path:
    type: String
    description: "Base path for disk images"
    default: "/mnt/nfs/moeny-images"
  os_variant:
    type: String
    description: "OS variant for the VM"
    default: "alpinelinux3.20"
  # Rarely Changed Network Configuration
  dhcp:
    type: Boolean
    description: "Enable DHCP on the VM"
    default: false
  nameserver1:
    type: String
    description: "Primary nameserver for the VM"
    default: "8.8.8.8"
  nameserver2:
    type: String
    description: "Secondary nameserver for the VM"
    default: "8.8.4.4"
  nameserver3:
    type: String
    description: "Tertiary nameserver for the VM"
    default: "1.1.1.1"
steps:
  - name: check_ip_availability
    description: Check if the target IP is already in use
    task: common::check_ip_availability
    targets: localhost
    parameters:
      network: $network
  - name: create_vm
    task: alpine::create_vm
    targets: $target_host
    parameters:
      iso_path: $iso_path
      vm_name: $vm_name
      ram: $ram
      vcpus: $vcpus
      disk_size: $disk_size
      disk_path: "${disk_path}/${vm_name}.qcow2"
      network: $network
      os_variant: $os_variant
  - name: install_alpine
    description: Install Alpine OS on the VM
    task: alpine::install_alpine
    targets: localhost
    parameters:
      vm_name: $vm_name
      disk_path: "${disk_path}/${vm_name}.qcow2"
      staging_ip: $staging_ip
      gateway_ip: $gateway_ip
  - name: install_packages
    description: Install Packages on the VM
    task: alpine::install_packages
    targets: localhost
    parameters:
      staging_ip: $staging_ip
  - name: install_zabbix
    description: Install Zabbix on the VM
    task: alpine::install_zabbix
    targets: localhost
    parameters:
      install_zabbix_bool: $install_zabbix_bool
      staging_ip: $staging_ip
  - name: install_docker
    description: Install Docker on the VM
    task: alpine::install_docker
    targets: localhost
    parameters:
      install_docker_bool: $install_docker_bool
      staging_ip: $staging_ip
  - name: system_setup
    task: alpine::system_setup
    targets: localhost
    parameters:
      ip_with_cidr: $ip_with_cidr
      hostname: $hostname
      dhcp: $dhcp
      gateway_ip: $gateway_ip
      nameserver1: $nameserver1
      nameserver2: $nameserver2
      nameserver3: $nameserver3
      staging_ip: $staging_ip
  - name: add_dns_a_record
    description: Add a DNS A record for the VM
    task: common::add_dns_a_record
    targets: localhost
    parameters:
      add_a_record_bool: $add_a_record_bool
      ip_with_cidr: $ip_with_cidr
      dns_hostname: $dns_hostname
      dns_ttl: $dns_ttl
return:
  message: "VM ${vm_name} created and updated successfully!"
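To review the plan's parameters and defaults without running it, Bolt can print its signature:

```shell
# Show documented parameters, types, and defaults for the plan
bolt plan show alpine::create_vm
```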

@@ -0,0 +1,34 @@
#!/bin/bash
# Task to Create an Alpine VM (alpine::create_vm)
# Input Variables
ISO_PATH=$PT_iso_path
VM_NAME=$PT_vm_name
RAM=$PT_ram
VCPUS=$PT_vcpus
DISK_SIZE=$PT_disk_size
DISK_PATH=$PT_disk_path
NETWORK=$PT_network
OS_VARIANT=$PT_os_variant
# Create VM disk if not already exists
if [ ! -f "$DISK_PATH" ]; then
    qemu-img create -f qcow2 "$DISK_PATH" "${DISK_SIZE}G" > /dev/null 2>&1
fi
# Create VM
virt-install \
--name "$VM_NAME" \
--ram "$RAM" \
--vcpus "$VCPUS" \
--os-variant "$OS_VARIANT" \
--disk path="$DISK_PATH",format=qcow2 \
--cdrom "$ISO_PATH" \
--network network="$NETWORK" \
--graphics vnc \
--noautoconsole \
--autostart \
--wait -1 \
> /dev/null 2>&1 &
sleep 25

@@ -0,0 +1,59 @@
#!/bin/bash
# Task to Install Alpine on a VM (alpine::install_alpine)
# Input Variables
VM_NAME="${PT_vm_name}"
DISK_PATH="${PT_disk_path}"
STAGING_IP="${PT_staging_ip}"
GATEWAY_IP="${PT_gateway_ip}"
# Wait for VM to be accessible via SSH
while ! ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@${STAGING_IP} "echo 'VM is accessible'"; do
    sleep 5
done
# Create autoinstall answer file directly on VM
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "cat > /tmp/alpine-answers << 'EOF'
KEYMAPOPTS=\"us us\"
HOSTNAMEOPTS=\"-n vm-template-staging\"
INTERFACESOPTS=\"auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address ${STAGING_IP}
netmask 255.255.255.0
gateway ${GATEWAY_IP}
\"
DNSOPTS=\"-n 8.8.8.8 8.8.4.4\"
TIMEZONEOPTS=\"-z UTC\"
PROXYOPTS=\"none\"
APKREPOSOPTS=\"-1\"
USEROPTS=\"-a -u moeny\"
USERSSHKEY=\"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCRMJNdI/n/7xYN65zHFN8hlRSDg5OPJ12AwOsUyP8OmKCQTapoVQ/suvjaUTCtt8o28QNIQm1vAD03hFNzVJn6F6FJu9vUbR+YqlmzmzGJXB6sWWTEnc9/GsVvLoculuzFYfa2qU9xFbuUTtqFRu6qor82TPAhy/yVWzIvRxlfuxKLpdU9paKiV+WtCkSpVoBgIH6soBE1swMX4ILIOGeFTrmCdBac4K1Bs0OarKtShR6PHdNiqPlwpCeQQDZD8ops69yBMc0t6poFZC9FYSj7arJEWvZN9YtUr+PJiYZQc+gIG4enPW1Zf4FEkXXvH/t6RaYMq9w/P5lIUNOVe169\"
ROOTSSHKEY=\"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCRMJNdI/n/7xYN65zHFN8hlRSDg5OPJ12AwOsUyP8OmKCQTapoVQ/suvjaUTCtt8o28QNIQm1vAD03hFNzVJn6F6FJu9vUbR+YqlmzmzGJXB6sWWTEnc9/GsVvLoculuzFYfa2qU9xFbuUTtqFRu6qor82TPAhy/yVWzIvRxlfuxKLpdU9paKiV+WtCkSpVoBgIH6soBE1swMX4ILIOGeFTrmCdBac4K1Bs0OarKtShR6PHdNiqPlwpCeQQDZD8ops69yBMc0t6poFZC9FYSj7arJEWvZN9YtUr+PJiYZQc+gIG4enPW1Zf4FEkXXvH/t6RaYMq9w/P5lIUNOVe169\"
SSHDOPTS=\"-c openssh\"
NTPOPTS=\"-c chrony\"
DISKOPTS=\"-m sys /dev/vda\"
EOF"
# Run installation commands over SSH
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "echo 'y' | setup-alpine -e -f /tmp/alpine-answers"
# Wait for installation to complete
sleep 45
# Reboot via SSH
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "reboot"
# Wait for VM to come back up
sleep 30
# Verify installation by trying to SSH
if ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@${STAGING_IP} "echo 'VM is running'"; then
    echo '{"status": "success", "message": "Alpine installation completed successfully"}'
    exit 0
else
    echo '{"status": "failure", "message": "Failed to install Alpine"}'
    exit 1
fi
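Because the staging IP is reused across installs, your local `known_hosts` can accumulate stale entries for it; clearing the entry between runs avoids host-key mismatch failures. This is a precaution, not part of the task itself:

```shell
# Remove any stale host key for the (reused) staging IP before the next run
ssh-keygen -R 100.40.223.190
```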

@@ -0,0 +1,31 @@
#!/bin/bash
# Task to Install Docker on an Alpine VM (alpine::install_docker)
# Input Variables
INSTALL_DOCKER="${PT_install_docker_bool}"
STAGING_IP="${PT_staging_ip}"
# Check if Docker installation is requested
if [ "$INSTALL_DOCKER" != "true" ]; then
    # Output JSON that Bolt will understand
    echo '{"status": "skipped", "message": "Docker installation not requested, skipping..."}'
    exit 0
fi
# Update package list and install Docker
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "apk update && apk add --no-cache docker docker-cli docker-cli-compose"
# Add current user to docker group
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "addgroup moeny docker"
# Start and enable Docker service
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rc-service docker start && rc-update add docker default"
# Verify installation
if ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "docker --version" > /dev/null 2>&1; then
    echo '{"status": "success", "message": "Docker installed successfully"}'
    exit 0
else
    echo '{"status": "failure", "message": "Docker installation failed"}'
    exit 1
fi

@@ -0,0 +1,26 @@
#!/bin/bash
# Task to Install Packages on an Alpine VM (alpine::install_packages)
# Input Variables
STAGING_IP="${PT_staging_ip}"
# Uncomment to enable community repository
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "sed -i '3s/^#//' /etc/apk/repositories"
# Install required packages
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "apk update && apk add --no-cache vim git fping htop sudo bash mtr rsync tmux"
# Change default shell to bash
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "sed -i -E '/^(root|moeny):/ s:/bin/sh$:/bin/bash:' /etc/passwd"
# Set mouse for vim
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "sed -i '1i let skip_defaults_vim = 1\nset mouse=' /etc/vim/vimrc"
# Add moeny user to sudo group
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "addgroup sudo;addgroup moeny sudo"
# Set no password to sudo group
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "echo '%sudo ALL=(ALL) NOPASSWD: ALL' | tee -a /etc/sudoers.d/nopasswd_sudo_group"
# Aliases for ll and la
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "sudo sed -i '1i # set ls -l and ls -a aliases\nalias ll='\''ls -l'\''\nalias la='\''ls -a'\''\n' /etc/bash/bashrc"

@@ -0,0 +1,34 @@
#!/bin/bash
# Task to Install Zabbix on an Alpine VM (alpine::install_zabbix)
# Input Variables
INSTALL_ZABBIX="${PT_install_zabbix_bool}"
STAGING_IP="${PT_staging_ip}"
# Check if Zabbix installation is requested
if [ "$INSTALL_ZABBIX" != "true" ]; then
    echo '{"status": "skipped", "message": "Zabbix installation not requested, skipping..."}'
    exit 0
fi
# Install zabbix-agent2
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "apk add zabbix-agent2"
# Configure zabbix-agent2
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "sed -i -e 's/^Server=127\.0\.0\.1/# Server=127.0.0.1/' -e 's/^ServerActive=127\.0\.0\.1/ServerActive=zabbix.moeny.ai,zabbix.moeny.internal/' /etc/zabbix/zabbix_agent2.conf"
# Add zabbix-agent2 to default runlevel
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rc-update add zabbix-agent2 default"
# Start zabbix-agent2
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rc-service zabbix-agent2 start"
# Verify installation
status_output=$(ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rc-service zabbix-agent2 status")
if echo "$status_output" | grep -q "status: started"; then
    echo '{"status": "success", "message": "Zabbix agent installed and running", "output": "'"$status_output"'"}'
    exit 0
else
    echo '{"status": "failure", "message": "Zabbix agent installation failed", "output": "'"$status_output"'"}'
    exit 1
fi
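Once the agent is running, an item key can be test-resolved on the VM itself as a sanity check (assuming the `-t` test flag is available on this `zabbix_agent2` build):

```shell
# On the VM: resolve a single item key locally without contacting the server
zabbix_agent2 -t agent.ping
```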

@@ -0,0 +1,37 @@
#!/bin/bash
# Task to Install Zabbix on an Alpine VM (alpine::post_install_zabbix)
# Install zabbix-agent2
sudo apk add zabbix-agent2 || {
    echo '{"status": "failure", "message": "Failed to install zabbix-agent2", "output": "'"$(sudo apk add zabbix-agent2 2>&1)"'"}'
    exit 1
}
# Configure zabbix-agent2
sudo sed -i -e 's/^Server=127\.0\.0\.1/# Server=127.0.0.1/' -e 's/^ServerActive=127\.0\.0\.1/ServerActive=10.44.0.5,zabbix.moeny.ai/' /etc/zabbix/zabbix_agent2.conf || {
    echo '{"status": "failure", "message": "Failed to configure zabbix-agent2", "output": "Configuration failed"}'
    exit 1
}
# Add zabbix-agent2 to default runlevel
sudo rc-update add zabbix-agent2 default || {
    echo '{"status": "failure", "message": "Failed to add zabbix-agent2 to default runlevel", "output": "'"$(sudo rc-update add zabbix-agent2 default 2>&1)"'"}'
    exit 1
}
# Start zabbix-agent2 with debug output
service_start=$(sudo rc-service zabbix-agent2 start 2>&1)
if [ $? -ne 0 ]; then
    echo '{"status": "failure", "message": "Failed to start zabbix-agent2", "output": "'"$service_start"'"}'
    exit 1
fi
# Verify installation with more detailed status check
status_output=$(sudo rc-service zabbix-agent2 status 2>&1)
if echo "$status_output" | grep -q "status: started"; then
    echo '{"status": "success", "message": "Zabbix agent installed and running", "output": "'"$status_output"'"}'
    exit 0
else
    echo '{"status": "failure", "message": "Zabbix agent status check failed", "output": "'"$status_output"'"}'
    exit 1
fi

@@ -0,0 +1,45 @@
{
  "description": "Configures system network settings using Alpine Linux network configuration",
  "parameters": {
    "ip_with_cidr": {
      "type": "String",
      "description": "IP address for the VM",
      "default": "100.40.223.190/24"
    },
    "hostname": {
      "type": "String",
      "description": "Hostname for the VM",
      "default": "vm-template-staging"
    },
    "dhcp": {
      "type": "Boolean",
      "description": "Whether to use DHCP for network configuration",
      "default": false
    },
    "gateway_ip": {
      "type": "String",
      "description": "Gateway IP address",
      "default": "100.40.223.1"
    },
    "nameserver1": {
      "type": "String",
      "description": "Primary DNS nameserver",
      "default": "8.8.8.8"
    },
    "nameserver2": {
      "type": "String",
      "description": "Secondary DNS nameserver",
      "default": "8.8.4.4"
    },
    "nameserver3": {
      "type": "String",
      "description": "Tertiary DNS nameserver",
      "default": "1.1.1.1"
    },
    "staging_ip": {
      "type": "String",
      "description": "Staging IP address",
      "default": "100.40.223.190"
    }
  }
}

@@ -0,0 +1,68 @@
#!/bin/bash
# Task to Configure the System on Alpine (alpine::system_setup)
# Using Bolt's environment variables
IP="${PT_ip_with_cidr}"
HOSTNAME="${PT_hostname}"
DHCP="${PT_dhcp}"
GATEWAY_IP="${PT_gateway_ip}"
NAMESERVER1="${PT_nameserver1}"
NAMESERVER2="${PT_nameserver2}"
NAMESERVER3="${PT_nameserver3}"
STAGING_IP="${PT_staging_ip}"
# Check if all required parameters are provided
if [ -z "$IP" ] || [ -z "$HOSTNAME" ] || [ -z "$DHCP" ] || [ -z "$GATEWAY_IP" ] || [ -z "$NAMESERVER1" ] || [ -z "$NAMESERVER2" ] || [ -z "$NAMESERVER3" ]; then
    echo '{"status": "failure", "message": "Missing required parameters. All parameters must be provided."}'
    exit 1
fi
# Install required packages
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "apk add --no-cache iptables"
# Configure iptables rules
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "iptables -A INPUT -p tcp --dport 22 -s 100.40.223.128/26 -j ACCEPT && \
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT && \
iptables -A INPUT -p tcp --dport 22 -j DROP"
# Save iptables rules
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rc-service iptables save"
# Configure network
if [ "$DHCP" = "false" ]; then
    # Create network configuration directly on VM
    ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "cat > /etc/network/interfaces << 'EOF'
auto eth0
iface eth0 inet static
    address ${IP}
    gateway ${GATEWAY_IP}
EOF"
fi
# Configure DNS directly on VM
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "cat > /etc/resolv.conf << 'EOF'
nameserver ${NAMESERVER1}
nameserver ${NAMESERVER2}
nameserver ${NAMESERVER3}
EOF"
# Set hostname
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "echo '${HOSTNAME}' > /etc/hostname"
# Update /etc/hosts
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "sed -i 's/127.0.0.1.*/127.0.0.1\t${HOSTNAME}/' /etc/hosts"
# Enable and start iptables service
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rc-update add iptables default && rc-service iptables start"
# Generate new SSH host keys
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "rm /etc/ssh/ssh_host_* && \
ssh-keygen -t rsa -b 4096 -f /etc/ssh/ssh_host_rsa_key -N \"\" && \
ssh-keygen -t ecdsa -b 521 -f /etc/ssh/ssh_host_ecdsa_key -N \"\" && \
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N \"\""
echo '{"status": "success", "message": "System configuration completed successfully"}'
# Reboot the system
ssh -o StrictHostKeyChecking=no root@${STAGING_IP} "nohup sh -c '(sleep 2 && reboot) &' > /dev/null 2>&1"
exit 0

@@ -0,0 +1,35 @@
#!/bin/bash
# This script adds a DNS A record to the DNS server zone file (common::add_dns_a_record)
# Bolt environment variables
ADD_A_RECORD="${PT_add_a_record_bool}"
IP="${PT_ip_with_cidr}"
HOSTNAME="${PT_dns_hostname}"
TTL="${PT_dns_ttl}"
# Check if A record addition is requested
if [ "$ADD_A_RECORD" != "true" ]; then
    echo '{"status": "skipped", "message": "A Record addition not requested, skipping..."}'
    exit 0
fi
# Check if required parameters are provided
if [ -z "$IP" ] || [ -z "$HOSTNAME" ] || [ -z "$TTL" ]; then
    echo '{"status": "failure", "message": "Error: the ip_with_cidr, dns_hostname, and dns_ttl parameters must all be provided"}'
    exit 1
fi
# Create DNS A record
IP_ADDRESS=$(echo ${IP} | cut -d'/' -f1)
nsupdate -k "./keys/tsig.key" << EOF
server ns1.moeny.ai
debug yes
zone moeny.ai
update add ${HOSTNAME}.moeny.ai ${TTL} A ${IP_ADDRESS}
send
EOF
# Force zone file update on DNS server
ssh moeny@ns1.moeny.ai "sudo rndc sync moeny.ai"
echo '{"status": "success", "message": "A Record successfully added."}'
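The `cut` pipeline used above to split the IP from its CIDR suffix can also be written with POSIX parameter expansion, avoiding a subshell; a small illustrative sketch (the variable names are examples, not part of the task):

```shell
#!/bin/sh
# Illustrative alternative to `cut -d'/' -f1`: strip the /CIDR suffix in-shell
ip_with_cidr="100.40.223.189/24"
ip_address="${ip_with_cidr%/*}"   # remove shortest suffix matching /*
echo "$ip_address"
```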

@@ -0,0 +1,9 @@
{
  "description": "Check if the target IP is already in use",
  "parameters": {
    "network": {
      "type": "String",
      "description": "Network type (internal-moeny or wan-verizon)"
    }
  }
}

@@ -0,0 +1,24 @@
#!/bin/bash
# This script checks the availability of an IP address (common::check_ip_availability)
# Extract parameters
network="$PT_network"
# Determine which IP to ping based on network
if [ "$network" = "internal-moeny" ]; then
    ping_ip="10.44.0.250"
elif [ "$network" = "wan-verizon" ]; then
    ping_ip="100.40.223.190"
else
    echo "{\"status\": \"error\", \"message\": \"Unsupported network type: $network. Must be either internal-moeny or wan-verizon.\"}"
    exit 1
fi
# Ping the target IP with 3 second timeout
if ping -c 1 -W 3 "$ping_ip" > /dev/null 2>&1; then
    echo "{\"status\": \"error\", \"message\": \"IP $ping_ip is already in use. Please choose a different IP.\"}"
    exit 1
else
    echo "{\"status\": \"success\", \"message\": \"IP $ping_ip is available.\"}"
    exit 0
fi

@@ -0,0 +1,24 @@
#!/bin/bash
# This script deletes a DNS A record from the DNS server zone file (common::delete_dns_a_record)
# Bolt environment variables
HOSTNAME="${PT_dns_hostname}"
# Check if required parameters are provided
if [ -z "$HOSTNAME" ]; then
    echo '{"status": "failure", "message": "Error: dns_hostname parameter must be provided"}'
    exit 1
fi
# Delete DNS A record
nsupdate -k "./keys/tsig.key" << EOF
server ns1.moeny.ai
debug yes
zone moeny.ai
update delete ${HOSTNAME}.moeny.ai A
send
EOF
# Force zone file update on DNS server
ssh moeny@ns1.moeny.ai "sudo rndc sync moeny.ai"
echo '{"status": "success", "message": "A Record successfully deleted."}'

@@ -0,0 +1,28 @@
#!/bin/bash
CONFIG_FILE="/etc/zabbix/zabbix_agent2.conf"
SEARCH_STRING="10.44.0.5"
REPLACE_STRING="zabbix.moeny.internal"
# Check if the configuration file exists using sudo
if ! sudo test -f "$CONFIG_FILE"; then
    echo "Error: File $CONFIG_FILE not found."
    exit 1
fi
# Check if the search string exists in the file using sudo
if sudo grep -q -F "$SEARCH_STRING" "$CONFIG_FILE"; then
    # Escape dots in the search string for sed
    ESCAPED_SEARCH_STRING=$(echo "$SEARCH_STRING" | sed 's/\./\\./g')
    # Replace the string in place using sudo without creating a backup
    sudo sed -i "s/$ESCAPED_SEARCH_STRING/$REPLACE_STRING/g" "$CONFIG_FILE"
    if [ $? -ne 0 ]; then
        echo "Error: Failed to replace string."
        exit 1
    fi
    echo "Replacement done."
else
    echo "No replacement needed. The string '$SEARCH_STRING' was not found."
fi

@@ -0,0 +1,161 @@
---
# Plan to Create an Ubuntu VM (ubuntu::create_vm)
parameters:
  target_host:
    type: String
    description: "Target host to create the VM on"
    default: "vortex"
  ## Main Configurations
  vm_name:
    type: String
    description: "Name of the VM"
    default: "vm-template-staging"
  # Network Configuration
  hostname:
    type: String
    description: "Hostname of the VM"
    default: "vm-template-staging"
  network:
    type: String
    description: "Network to connect the VM to"
    default: "wan-verizon"
  ip_with_cidr:
    type: String
    description: "Public IP of the VM"
    default: "100.40.223.190/24"
  gateway_ip:
    type: String
    description: "Gateway IP for the VM"
    default: "100.40.223.1"
  # Define Based on Whether Public or Internal VM
  iso_path:
    type: String
    description: "Path to the ISO file"
    default: "/mnt/nfs/kvm-isos/iso-build/ubuntu-22.04-autoinstall-wan_verizon.iso"
  staging_ip:
    type: String
    description: "Target VM for post-installation tasks as either public or internal"
    default: "public"
  ## Optional Configurations
  # Zabbix
  install_zabbix_bool:
    type: Boolean
    description: "Whether to install Zabbix on the VM"
    default: true
  # Docker
  install_docker_bool:
    type: Boolean
    description: "Whether to install Docker on the VM"
    default: true
  # DNS
  add_a_record_bool:
    type: Boolean
    description: "Whether to add a DNS A record for the VM"
    default: false
  dns_hostname:
    type: String
    description: "Hostname for the DNS A record"
    default: "vm-template-staging"
  dns_ttl:
    type: Integer
    description: "TTL for the DNS A record"
    default: 3600
  ## Rarely Changed Configurations
  # VM Specifications
  ram:
    type: Integer
    description: "Amount of RAM in MB"
    default: 8192
  vcpus:
    type: Integer
    description: "Number of virtual CPUs"
    default: 4
  disk_size:
    type: Integer
    description: "Size of the disk in GB"
    default: 100
  disk_path:
    type: String
    description: "Base path for disk images"
    default: "/mnt/nfs/moeny-images"
  os_variant:
    type: String
    description: "OS variant for the VM"
    default: "ubuntu22.04"
  # Rarely Changed Network Configuration
  dhcp:
    type: Boolean
    description: "Enable DHCP on the VM"
    default: false
  nameserver1:
    type: String
    description: "Primary nameserver for the VM"
    default: "8.8.8.8"
  nameserver2:
    type: String
    description: "Secondary nameserver for the VM"
    default: "8.8.4.4"
  nameserver3:
    type: String
    description: "Tertiary nameserver for the VM"
    default: "1.1.1.1"
steps:
  - name: check_ip_availability
    description: Check if the target IP is already in use
    task: common::check_ip_availability
    targets: localhost
    parameters:
      network: $network
  - name: create_vm
    task: ubuntu::create_vm
    targets: $target_host
    parameters:
      iso_path: $iso_path
      vm_name: $vm_name
      ram: $ram
      vcpus: $vcpus
      disk_size: $disk_size
      disk_path: "${disk_path}/${vm_name}.qcow2"
      network: $network
      os_variant: $os_variant
  - name: install_zabbix
    description: Install Zabbix on the VM
    task: ubuntu::install_zabbix
    targets: $staging_ip
    parameters:
      install_zabbix_bool: $install_zabbix_bool
  - name: install_docker
    description: Install Docker on the VM
    task: ubuntu::install_docker
    targets: $staging_ip
    parameters:
      install_docker_bool: $install_docker_bool
  - name: system_setup
    task: ubuntu::system_setup
    targets: $staging_ip
    parameters:
      ip_with_cidr: $ip_with_cidr
      hostname: $hostname
      dhcp: $dhcp
      gateway_ip: $gateway_ip
      nameserver1: $nameserver1
      nameserver2: $nameserver2
      nameserver3: $nameserver3
  - name: add_dns_a_record
    description: Add a DNS A record for the VM
    task: common::add_dns_a_record
    targets: localhost
    parameters:
      add_a_record_bool: $add_a_record_bool
      ip_with_cidr: $ip_with_cidr
      dns_hostname: $dns_hostname
      dns_ttl: $dns_ttl
return:
  message: "VM ${vm_name} created and updated successfully!"

@@ -0,0 +1,44 @@
{
  "description": "Creates a new VM using virt-install",
  "parameters": {
    "iso_path": {
      "type": "String",
      "description": "Path to the autoinstall ISO",
      "default": "/mnt/nfs/kvm-isos/iso-build/ubuntu-22.04-autoinstall.iso"
    },
    "vm_name": {
      "type": "String",
      "description": "Name of the VM",
      "default": "vm-template-staging"
    },
    "ram": {
      "type": "Integer",
      "description": "Amount of RAM in MB",
      "default": 2048
    },
    "vcpus": {
      "type": "Integer",
      "description": "Number of virtual CPUs",
      "default": 4
    },
    "disk_size": {
      "type": "Integer",
      "description": "Size of the VM disk in GB",
      "default": 100
    },
    "disk_path": {
      "type": "String",
      "description": "Base path for disk images",
      "default": "/mnt/nfs/kvm-images/vm-template-staging.qcow2"
    },
    "network": {
      "type": "String",
      "description": "Network to connect the VM to",
      "default": "wan-verizon"
    },
    "os_variant": {
      "type": "String",
      "description": "OS variant for the VM"
    }
  }
}

@@ -0,0 +1,33 @@
#!/bin/bash
# Task to Create an Ubuntu VM (ubuntu::create_vm)
# Input Variables
ISO_PATH=$PT_iso_path
VM_NAME=$PT_vm_name
RAM=$PT_ram
VCPUS=$PT_vcpus
DISK_SIZE=$PT_disk_size
DISK_PATH=$PT_disk_path
NETWORK=$PT_network
OS_VARIANT=$PT_os_variant
# Create VM disk if not already exists
if [ ! -f "$DISK_PATH" ]; then
    qemu-img create -f qcow2 "$DISK_PATH" "${DISK_SIZE}G" > /dev/null 2>&1
fi
# Create VM
virt-install \
--name "$VM_NAME" \
--ram "$RAM" \
--vcpus "$VCPUS" \
--os-variant "$OS_VARIANT" \
--disk path="$DISK_PATH",format=qcow2 \
--cdrom "$ISO_PATH" \
--network network="$NETWORK" \
--graphics vnc \
--noautoconsole \
--autostart \
--wait -1
sleep 45

@@ -0,0 +1,50 @@
#!/bin/bash
# Task to Install Docker on Ubuntu (ubuntu::install_docker)
# Input Variables
INSTALL_DOCKER="${PT_install_docker_bool}"
# Check if Docker installation is requested
if [ "$INSTALL_DOCKER" != "true" ]; then
    # Output JSON that Bolt will understand
    echo '{"status": "skipped", "message": "Docker installation not requested, skipping..."}'
    exit 0
fi
# Update package list and install prerequisites
sudo apt-get update
sudo apt-get install -y \
ca-certificates \
curl \
gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update package list again and install Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Add current user to docker group
sudo usermod -aG docker "$USER"
# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Verify installation
if docker --version > /dev/null 2>&1; then
echo "Docker installed successfully"
exit 0
else
echo "Docker installation failed"
exit 1
fi
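The early-exit guard at the top of this task is worth noting: when the feature is not requested, the script prints a JSON object on stdout, which Bolt records as the task result. The function below mirrors that pattern in isolation (the function name is illustrative):

```shell
# Sketch of the task's skip guard: emit structured JSON either way so
# Bolt can record a meaningful result.
docker_guard() {
    local install_docker="$1"
    if [ "$install_docker" != "true" ]; then
        echo '{"status": "skipped", "message": "Docker installation not requested, skipping..."}'
        return 0
    fi
    echo '{"status": "continue"}'
}

docker_guard "false"   # prints the skipped JSON and stops
docker_guard "true"    # falls through to the install steps
```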

View File

@ -0,0 +1,41 @@
#!/bin/bash
# Task to Install Zabbix on Ubuntu (ubuntu::install_zabbix)
# Input Variables
INSTALL_ZABBIX="${PT_install_zabbix_bool}"
# Check if Zabbix installation is requested
if [ "$INSTALL_ZABBIX" != "true" ]; then
echo '{"status": "skipped", "message": "Zabbix installation not requested, skipping..."}'
exit 0
fi
# Download the Zabbix release package
sudo wget -O /tmp/zabbix-release.deb https://repo.zabbix.com/zabbix/7.2/release/ubuntu/pool/main/z/zabbix-release/zabbix-release_latest_7.2+ubuntu22.04_all.deb
# Install the Zabbix release package
sudo dpkg -i /tmp/zabbix-release.deb
# Update the package list
sudo apt update
# Install the Zabbix agent
sudo apt install -y zabbix-agent2
# Configure the Zabbix agent
sudo sed -i -e 's/^Server=127\.0\.0\.1/# Server=127.0.0.1/' -e 's/^ServerActive=127\.0\.0\.1/ServerActive=zabbix.moeny.ai,zabbix.moeny.internal/' -e 's/^LogFileSize=0/LogFileSize=1/' -e 's/^Hostname=Zabbix server/# Hostname=Zabbix server/' -e 's/^# HostnameItem=system\.hostname/HostnameItem=system.hostname/' /etc/zabbix/zabbix_agent2.conf
# Enable the Zabbix agent
sudo systemctl enable zabbix-agent2
# Start the Zabbix agent
sudo systemctl start zabbix-agent2
# Verify installation
if sudo systemctl is-active --quiet zabbix-agent2; then
echo "Zabbix agent installed and running successfully"
exit 0
else
echo "Zabbix agent installation failed or service is not running"
exit 1
fi
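The single `sed` invocation above makes five edits in one pass: it comments out the passive `Server` line, points `ServerActive` at the moeny Zabbix servers, enables log rotation, and switches the hostname from a static value to `HostnameItem`. Applied to a sample config fragment (temporary file, no `sudo`), it behaves like this:

```shell
# Sample fragment with the stock zabbix_agent2.conf defaults
CONF="$(mktemp)"
cat > "$CONF" << 'EOF'
Server=127.0.0.1
ServerActive=127.0.0.1
LogFileSize=0
Hostname=Zabbix server
# HostnameItem=system.hostname
EOF

# Same expressions as the task, split one per line for readability
sed -i -e 's/^Server=127\.0\.0\.1/# Server=127.0.0.1/' \
       -e 's/^ServerActive=127\.0\.0\.1/ServerActive=zabbix.moeny.ai,zabbix.moeny.internal/' \
       -e 's/^LogFileSize=0/LogFileSize=1/' \
       -e 's/^Hostname=Zabbix server/# Hostname=Zabbix server/' \
       -e 's/^# HostnameItem=system\.hostname/HostnameItem=system.hostname/' "$CONF"

cat "$CONF"
```

Note that `^Server=` cannot accidentally match the `ServerActive` line, since the anchor requires `=` immediately after `Server`.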

View File

@ -0,0 +1,32 @@
#!/bin/bash
# Post-install task to install Zabbix on Ubuntu (ubuntu::post_install_zabbix)
# Download the Zabbix release package
sudo wget -O /tmp/zabbix-release.deb https://repo.zabbix.com/zabbix/7.2/release/ubuntu/pool/main/z/zabbix-release/zabbix-release_latest_7.2+ubuntu22.04_all.deb
# Install the Zabbix release package
sudo dpkg -i /tmp/zabbix-release.deb
# Update the package list
sudo apt update
# Install the Zabbix agent
sudo apt install -y zabbix-agent2
# Configure the Zabbix agent
sudo sed -i -e 's/^Server=127\.0\.0\.1/# Server=127.0.0.1/' -e 's/^ServerActive=127\.0\.0\.1/ServerActive=10.44.0.5,zabbix.moeny.ai/' -e 's/^LogFileSize=0/LogFileSize=1/' -e 's/^Hostname=Zabbix server/# Hostname=Zabbix server/' -e 's/^# HostnameItem=system\.hostname/HostnameItem=system.hostname/' /etc/zabbix/zabbix_agent2.conf
# Enable the Zabbix agent
sudo systemctl enable zabbix-agent2
# Start the Zabbix agent
sudo systemctl start zabbix-agent2
# Verify installation
if sudo systemctl is-active --quiet zabbix-agent2; then
echo "Zabbix agent installed and running successfully"
exit 0
else
echo "Zabbix agent installation failed or service is not running"
exit 1
fi

View File

@ -0,0 +1,39 @@
{
"description": "Configures system network settings using Ubuntu netplan",
"parameters": {
"ip_with_cidr": {
"type": "String",
"description": "IP address for the VM"
},
"hostname": {
"type": "String",
"description": "Hostname for the VM",
"default": "vm-template-staging"
},
"dhcp": {
"type": "Boolean",
"description": "Whether to use DHCP for network configuration",
"default": false
},
"gateway_ip": {
"type": "String",
"description": "Gateway IP address",
"default": "100.40.223.1"
},
"nameserver1": {
"type": "String",
"description": "Primary DNS nameserver",
"default": "8.8.8.8"
},
"nameserver2": {
"type": "String",
"description": "Secondary DNS nameserver",
"default": "8.8.4.4"
},
"nameserver3": {
"type": "String",
"description": "Tertiary DNS nameserver",
"default": "1.1.1.1"
}
}
}

View File

@ -0,0 +1,65 @@
#!/bin/bash
# Task to Configure the System on Ubuntu (ubuntu::system_setup)
# Using Bolt's environment variables
IP="${PT_ip_with_cidr}"
HOSTNAME="${PT_hostname}"
DHCP="${PT_dhcp}"
GATEWAY="${PT_gateway_ip}"
NAMESERVER1="${PT_nameserver1}"
NAMESERVER2="${PT_nameserver2}"
NAMESERVER3="${PT_nameserver3}"
# Check if all required parameters are provided
if [ -z "$IP" ] || [ -z "$HOSTNAME" ] || [ -z "$DHCP" ] || [ -z "$GATEWAY" ] || [ -z "$NAMESERVER1" ] || [ -z "$NAMESERVER2" ] || [ -z "$NAMESERVER3" ]; then
echo "Missing required parameters. All parameters must be provided."
exit 1
fi
# Configure and install iptables-persistent
sudo DEBIAN_FRONTEND=noninteractive apt-get -y install iptables-persistent
# Restrict SSH access
sudo iptables -A INPUT -p tcp --dport 22 -s 100.40.223.128/26 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j DROP
# Use netfilter-persistent to save rules instead of direct file writing
sudo netfilter-persistent save
# Create the new netplan configuration
sudo tee /etc/cloud/cloud.cfg.d/90-installer-network.cfg << EOL
network:
version: 2
ethernets:
enp1s0:
dhcp4: ${DHCP}
EOL
# If DHCP is false, add static IP configuration
if [ "$DHCP" = "false" ]; then
sudo tee -a /etc/cloud/cloud.cfg.d/90-installer-network.cfg << EOL
addresses:
- ${IP}
routes:
- to: default
via: ${GATEWAY}
nameservers:
addresses: [${NAMESERVER1}, ${NAMESERVER2}, ${NAMESERVER3}]
EOL
fi
# Set the hostname
sudo hostnamectl set-hostname "${HOSTNAME}"
echo "${HOSTNAME}" | sudo tee /etc/hostname > /dev/null
# Update /etc/hosts
sudo sed -i "s/127.0.1.1.*/127.0.1.1\t${HOSTNAME}/" /etc/hosts
echo "System configuration completed successfully"
# Reboot in the background so the task can return before connectivity drops.
# (Alternative left commented out: apply netplan without a full reboot.)
# nohup bash -c "(sleep 2 && sudo netplan apply) &" > /dev/null 2>&1
# exit 0
nohup bash -c "(sleep 2 && sudo reboot) &" > /dev/null 2>&1
exit 0
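The two heredocs above compose a single netplan YAML: the first writes the base stanza, and the second appends the static-IP block only when DHCP is disabled. The composition can be previewed against a temporary file (all values below are examples; the real task writes to the cloud-init config path and targets `enp1s0`):

```shell
# Temporary file and example values for illustration only
CFG="$(mktemp)"
DHCP="false"
IP="100.40.223.189/24"
GATEWAY="100.40.223.1"
NAMESERVER1="8.8.8.8"; NAMESERVER2="8.8.4.4"; NAMESERVER3="1.1.1.1"

# Base stanza, as written by the first heredoc in the task
tee "$CFG" << EOL
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: ${DHCP}
EOL

# Static-IP stanza, appended only when DHCP is disabled; indentation
# must line up under the enp1s0 key for the YAML to stay valid
if [ "$DHCP" = "false" ]; then
    tee -a "$CFG" << EOL
      addresses:
        - ${IP}
      routes:
        - to: default
          via: ${GATEWAY}
      nameservers:
        addresses: [${NAMESERVER1}, ${NAMESERVER2}, ${NAMESERVER3}]
EOL
fi
```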