Ansible on Debian 12 and SysLinuxOS

A guide on how to install Ansible on Debian 12 and SysLinuxOS 12. Ansible is an open-source automation tool used to automate IT tasks such as configuration management, provisioning, deployment, and orchestration.

**Some examples of what Ansible can do**

* Configuration management for servers and virtual machines
* Application and software deployment
* Orchestration of IT processes
* Provisioning of cloud infrastructure

Installation

Ansible can be installed via apt:

$ sudo apt update
$ sudo apt install ansible -y

or via pip:

$ sudo apt install python3 python3-pip -y
$ pip install ansible --break-system-packages

In the latter case the executable ends up in ~/.local/bin/ansible, so add that directory to your PATH:

$ export PATH=$PATH:/home/$USER/.local/bin

To make the change permanent, add the command to .bashrc. If it is not already there:

$ nano ~/.bashrc

and insert:

PATH=$PATH:~/bin
export PATH=$PATH:/home/$USER/.local/bin
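Since ~/.bashrc can be sourced more than once, a guarded variant avoids appending the same directory repeatedly. A minimal sketch (the guard logic is an addition, not part of the original guide):

```shell
# Append ~/.local/bin to PATH only if it is not already there,
# so re-sourcing ~/.bashrc does not grow PATH indefinitely.
dir="$HOME/.local/bin"
case ":$PATH:" in
  *":$dir:"*) ;;               # already present: do nothing
  *) PATH="$PATH:$dir" ;;
esac
export PATH
```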

then test:

$ ansible --version

ansible [core 2.15.5]
config file = None
configured module search path = ['/home/edmond/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/edmond/.local/lib/python3.11/site-packages/ansible
ansible collection location = /home/edmond/.ansible/collections:/usr/share/ansible/collections
executable location = /home/edmond/.local/bin/ansible
python version = 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True

Ansible must be installed on one of the nodes. The management node is known as the Control Node. This node holds the Ansible playbook, a YAML file containing the steps the user wants to run on one or more machines, usually called managed nodes.
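To make the idea concrete, this is what a minimal playbook sketch looks like (the htop task is purely illustrative and not part of this guide's setup; real playbooks are built later in the guide):

```yaml
---
- hosts: servers                # group defined in the inventory file
  become: yes
  tasks:
    - name: Make sure htop is installed   # illustrative task
      apt:
        name: htop
        state: present
```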

Prerequisites

Three servers were used for this guide:

Role                                            IP address
Control Node (SysLinuxOS 12)                    192.168.1.168
Managed Node 1 (server 1, Debian 12)            192.168.1.200
Managed Node 2 (server 2, Raspberry Pi OS 12)   192.168.1.251

Creating the hosts inventory file

This file handles the connection to the managed nodes:

$ mkdir ~/project
$ nano  ~/project/hosts

and insert the IPs and usernames of the nodes to automate:

[servers]
server1 ansible_host=192.168.1.200 ansible_user=edmond ansible_ssh_port=22
server2 ansible_host=192.168.1.251 ansible_user=edmond ansible_ssh_port=22
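As a convenience, the inventory can also be written non-interactively with a heredoc; a sketch using the same example values as above, handy when bootstrapping several control nodes:

```shell
# Write the inventory in one shot; IPs and username are the example values from this guide.
mkdir -p ~/project
cat > ~/project/hosts <<'EOF'
[servers]
server1 ansible_host=192.168.1.200 ansible_user=edmond ansible_ssh_port=22
server2 ansible_host=192.168.1.251 ansible_user=edmond ansible_ssh_port=22
EOF
grep -c 'ansible_host' ~/project/hosts    # should print 2
```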

Next, if you do not already have SSH access, create a key and copy it to the two servers:

Creating and copying the SSH key
$ sudo su
# ssh-keygen
# ssh-copy-id root@192.168.1.200
# ssh-copy-id root@192.168.1.251
Using Ansible modules

Syntax:

$ ansible -i <host_file> -m <module> <host>

Where:

  • -i ~/project/hosts: the inventory file with the list of hosts
  • -m: specifies the module, e.g. ping, shell, etc.
  • <host>: the Ansible hosts on which to run the module

Ping the nodes using the Ansible ping module:

$ ansible -i ~/project/hosts -m ping all

ping output:

edmond@edmondbox:~$ ansible -i ~/project/hosts -m ping all
server2 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
server1 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}

Run shell commands using the Ansible shell module:

$ ansible -i ~/project/hosts -m shell -a "uptime" all

uptime output:

$ ansible -i ~/project/hosts -m shell -a "uptime" all

server2 | CHANGED | rc=0 >>
19:51:43 up 1 day, 3:00, 1 user, load average: 0.35, 0.11, 0.08
server1 | CHANGED | rc=0 >>
19:51:44 up 3:36, 1 user, load average: 0.00, 0.00, 0.00

df output:

$ ansible -i ~/project/hosts -m shell -a "df -h" all

server1 | CHANGED | rc=0 >>
Filesystem Size Used Avail Use% Mounted on
udev 661M 0 661M 0% /dev
tmpfs 185M 1.8M 184M 1% /run
/dev/mmcblk0p2 117G 8.0G 103G 8% /
tmpfs 925M 0 925M 0% /dev/shm
tmpfs 5.0M 16K 5.0M 1% /run/lock
/dev/mmcblk0p1 510M 61M 450M 12% /boot/firmware
tmpfs 185M 0 185M 0% /run/user/1000
server2 | CHANGED | rc=0 >>
Filesystem Size Used Avail Use% Mounted on
udev 1.6G 0 1.6G 0% /dev
tmpfs 380M 1.2M 379M 1% /run
/dev/mmcblk0p2 59G 11G 45G 19% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 16K 5.0M 1% /run/lock
/dev/mmcblk0p1 510M 61M 450M 12% /boot/firmware
tmpfs 380M 0 380M 0% /run/user/1000

free output:

$ ansible -i ~/project/hosts -m shell -a "free -m" all

server1 | CHANGED | rc=0 >>
total used free shared buff/cache available
Mem: 1848 732 126 13 1005 1115
Swap: 99 0 99
server2 | CHANGED | rc=0 >>
total used free shared buff/cache available
Mem: 3793 577 1916 45 1378 3215
Swap: 99 0 99

Using the apt module

With the apt module you can run the classic apt commands: apt update, apt upgrade, apt install, and so on. In this case we will use playbook YAML files.

1) apt update, apt upgrade, and a reboot if a new kernel was installed

$ nano ~/project/upgrade.yml

insert:

---
- hosts: servers
  become: yes
  become_user: root
  tasks:
    - name: Update apt repo and cache on all Debian boxes
      apt: update_cache=yes force_apt_get=yes cache_valid_time=3600

    - name: Upgrade all packages on servers
      apt: upgrade=dist force_apt_get=yes

    - name: Check if a reboot is needed on all servers
      register: reboot_required_file
      stat: path=/var/run/reboot-required get_md5=no

    - name: Reboot the box if kernel updated
      reboot:
        msg: "Reboot initiated by Ansible for kernel updates"
        connect_timeout: 5
        reboot_timeout: 300
        pre_reboot_delay: 0
        post_reboot_delay: 30
        test_command: uptime
      when: reboot_required_file.stat.exists

apt upgrade command:

$ ansible-playbook project/upgrade.yml -i project/hosts

output:

BECOME password:

PLAY [servers] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server2]
ok: [server1]

TASK [Update apt repo and cache on all Debian boxes] ***************************
changed: [server2]
changed: [server1]

TASK [Upgrade all packages on servers] *****************************************
ok: [server2]
ok: [server1]

TASK [Check if a reboot is needed on all servers] ******************************
ok: [server2]
ok: [server1]

TASK [Reboot the box if kernel updated] ****************************************
skipping: [server1]
skipping: [server2]

PLAY RECAP *********************************************************************
server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 
server2 : ok=4 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

2) Installing a single package (bc)

$ nano ~/project/package.yml

insert:

- hosts: all
  become: yes
  tasks:
    - name: Install the latest bc package
      apt: name=bc state=latest update_cache=true

run with the -K option, which asks for the privilege-escalation (sudo) password:

$ ansible-playbook project/package.yml -i project/hosts -K

output:

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server2]
ok: [server1]

TASK [Install the latest bc package] *******************************************
changed: [server2]
changed: [server1]

PLAY RECAP *********************************************************************
server1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
server2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

3) Installing the 7zip package with debug output from the managed nodes

$ nano ~/project/output.yml

insert:

- hosts: all
  become: yes
  tasks:
    - name: Capture the Output
      apt: name=7zip state=present update_cache=true
      register: apt_output

    - debug: var=apt_output

command:

$ ansible-playbook project/output.yml -i project/hosts -K

output:

 "(Reading database ... 65%",
"(Reading database ... 70%",
"(Reading database ... 75%",
"(Reading database ... 80%",
"(Reading database ... 85%",
"(Reading database ... 90%",
"(Reading database ... 95%",
"(Reading database ... 100%",
"(Reading database ... 76140 files and directories currently installed.)",
"Preparing to unpack .../7zip_22.01+dfsg-8_arm64.deb ...",
"Unpacking 7zip (22.01+dfsg-8) ...",
"Setting up 7zip (22.01+dfsg-8) ...",
"Processing triggers for man-db (2.11.2-2) ..."

4) Installing multiple packages and checking what is already installed

$ nano ~/project/packages.yml

insert:

---
- hosts: all
  become: yes
  tasks:
    - name: Update apt cache and make sure Wget, Curl and Terminator are installed
      apt:
        name: "{{ item }}"
        update_cache: yes
      loop:
        - wget
        - curl
        - terminator

command:

$ ansible-playbook project/packages.yml -i project/hosts -K

output:

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server2]
ok: [server1]

TASK [Update apt cache and make sure Wget, Curl and Terminator are installed] ***
ok: [server2] => (item=wget)
ok: [server1] => (item=wget)
ok: [server2] => (item=curl)
ok: [server1] => (item=curl)
changed: [server2] => (item=terminator)
changed: [server1] => (item=terminator)

PLAY RECAP *********************************************************************
server1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
server2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

As you can see, the wget and curl packages are already present on both servers, while terminator gets installed.
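One note on the loop form: it runs one apt transaction per package. The apt module also accepts a list for its name parameter, which resolves and installs everything in a single transaction. A variant sketch (not from the original guide):

```yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Install wget, curl and terminator in a single apt transaction
      apt:
        name:
          - wget
          - curl
          - terminator
        update_cache: yes
```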

 


enjoy 😉

Installing Nextcloud 27 on Raspberry Pi 4

UPDATE: 14/10/23

With the bookworm release, php8.2 is already included.

A guide on how to install Nextcloud 27 on Raspberry Pi 4. Unlike previous versions, installing Nextcloud 27 requires php8.2.

Note: make sure you have a working Raspberry Pi 4 running Raspberry Pi OS (bullseye at the time of writing).

1) Update the system:
$ sudo apt update
$ sudo apt upgrade -y
$ sudo reboot
2) Add the repositories for the latest PHP versions:
$ curl https://packages.sury.org/php/apt.gpg | sudo tee /usr/share/keyrings/suryphp-archive-keyring.gpg >/dev/null
$ echo "deb [signed-by=/usr/share/keyrings/suryphp-archive-keyring.gpg] https://packages.sury.org/php/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/sury-php.list
$ sudo apt update; sudo apt upgrade -y
3) Install Apache2, MySQL, php8.2, and certbot:
$ sudo apt install apache2 php8.2 php8.2-gd php8.2-sqlite3 php8.2-curl php8.2-zip php8.2-xml php8.2-mbstring php8.2-mysql php8.2-bz2 php8.2-intl php8.2-imap php8.2-gmp libapache2-mod-php8.2 mariadb-server python3-certbot-apache php8.2-fpm php8.2-apcu php8.2-imagick php8.2-bcmath libmagickcore-6.q16-6-extra ffmpeg redis-server php8.2-redis
4) Check Apache2
$ sudo a2enmod proxy_fcgi setenvif
$ sudo a2enconf php8.2-fpm
$ sudo systemctl restart apache2
$ sudo systemctl status apache2
5) Create the Nextcloud user and database
$ sudo mysql -u root -p

press Enter, then run:

create database nextcloud_db;
create user nextclouduser@localhost identified by 'YOUR-STRONG-PWD';
grant all privileges on nextcloud_db.* to nextclouduser@localhost identified by 'YOUR-STRONG-PWD';
flush privileges;
exit;
6) Download Nextcloud
$ cd /var/www/
$ sudo wget -v https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo rm latest.zip

Create the storage directory; it can also be placed outside the Nextcloud folder or on an external drive.

$ sudo mkdir -p /var/www/nextcloud/data
$ sudo chown -R www-data:www-data /var/www/nextcloud/
$ sudo chmod 750 /var/www/nextcloud/data
7) Configure Apache2
$ sudo nano /etc/apache2/sites-available/nextcloud.conf

and paste in this simple configuration, which will point to [IPADDRESS]/nextcloud:

Alias /nextcloud "/var/www/nextcloud/"

<Directory /var/www/nextcloud/>
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews

    <IfModule mod_dav.c>
        Dav off
    </IfModule>
</Directory>
$ sudo a2ensite nextcloud.conf
$ sudo systemctl reload apache2

If you prefer to use your own domain instead:

<VirtualHost *:80>
    DocumentRoot /var/www/nextcloud/
    ServerName your.domain.com

    <Directory /var/www/nextcloud/>
        Require all granted
        AllowOverride All
        Options FollowSymLinks MultiViews

        <IfModule mod_dav.c>
            Dav off
        </IfModule>
    </Directory>
</VirtualHost>
$ sudo a2ensite nextcloud.conf
$ sudo systemctl reload apache2
8) Nextcloud installation and post-installation

 

Nextcloud Server on Raspberry Pi

Go to the server address: [IPADDRESS]/nextcloud

$ hostname -I

Create a username and password for web access, then enter nextclouduser and nextcloud_db. By default the data lives in /var/www/nextcloud/data, but it can also be moved later.

9) Problems accessing the Nextcloud server

If you can no longer reach the server, check Apache2's two main files, 000-default.conf and default-ssl.conf, which may be pointing at the wrong directory:

$ sudo nano /etc/apache2/sites-available/000-default.conf
$ sudo nano /etc/apache2/sites-available/default-ssl.conf

from

DocumentRoot /var/www/html

to

DocumentRoot /var/www/
10) Apache2 tuning

A few essential changes:

$ sudo nano /etc/php/8.2/fpm/php.ini

find these three settings and change them from this:

memory_limit = 128M
post_max_size = 8M
upload_max_filesize = 2M

to this:

memory_limit = 1024M
post_max_size = 1024M
upload_max_filesize = 1024M
$ sudo systemctl restart apache2
$ sudo systemctl restart php8.2-fpm.service
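If you prefer to script the three edits, sed can do them in one pass. A self-contained sketch that demonstrates the substitutions on a throwaway sample file; point it at /etc/php/8.2/fpm/php.ini (with sudo) to apply them for real:

```shell
# Demonstrate the three substitutions on a throwaway copy of the settings.
ini=$(mktemp)
printf 'memory_limit = 128M\npost_max_size = 8M\nupload_max_filesize = 2M\n' > "$ini"

sed -i \
  -e 's/^memory_limit = .*/memory_limit = 1024M/' \
  -e 's/^post_max_size = .*/post_max_size = 1024M/' \
  -e 's/^upload_max_filesize = .*/upload_max_filesize = 1024M/' \
  "$ini"

grep -E '^(memory_limit|post_max_size|upload_max_filesize)' "$ini"
```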

Also change the listening ports, as below:

$ sudo nano /etc/apache2/ports.conf
Listen 0.0.0.0:80

<IfModule ssl_module>
Listen 0.0.0.0:443
</IfModule>

<IfModule mod_gnutls.c>
Listen 443
</IfModule>
$ sudo systemctl restart apache2
11) Self-signed SSL certificate

In the example below I use a self-signed certificate, but a Let's Encrypt certificate is recommended (see section 12).

$ sudo mkdir -p /etc/apache2/ssl
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt
$ sudo a2enmod ssl
$ sudo systemctl restart apache2

enter the details for generating the certificate:

Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:

then find these two lines:

$ sudo nano /etc/apache2/sites-available/default-ssl.conf

and change them from this:

SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

to this:

SSLCertificateFile /etc/apache2/ssl/apache.crt
SSLCertificateKeyFile /etc/apache2/ssl/apache.key
$ sudo a2ensite default-ssl.conf
$ sudo systemctl reload apache2
12) Installing a certificate with Let's Encrypt
  • The Raspberry Pi must use a domain/hostname equivalent to example.com. This can be changed in /etc/hostname, followed by a reboot.
  • Your public IP must point to that domain; if you do not have a public IP, use a dynamic DNS service.
  • First make sure port 80 is open towards your server, otherwise the certificates cannot be obtained. Port 443 will need to be opened afterwards.

With certbot already installed earlier, it only takes:

$ sudo certbot --webroot -w /var/www/nextcloud -i apache --agree-tos --redirect --hsts --staple-ocsp --email your@mail.com -d server.example.com

If everything went well, you will see:

IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2021-03-07. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

So that the server remains reachable at [IPADDRESS]/nextcloud, check that the line Alias /nextcloud "/var/www/nextcloud/" is present:

$ sudo nano /etc/apache2/sites-available/nextcloud.conf

as below:

Alias /nextcloud "/var/www/nextcloud/"
<VirtualHost *:80>
    DocumentRoot /var/www/nextcloud/
    ServerName server.example.com
13) Security

13.1) Installing and configuring the ufw firewall:

$ sudo apt install ufw
$ sudo ufw enable

block all incoming connections:

$ sudo ufw default allow outgoing
$ sudo ufw default deny incoming

open only the ports you actually need, such as SSH or HTTPS; in any case, never use the standard ports:

$ sudo ufw allow 22
$ sudo ufw allow 443/tcp
$ sudo ufw allow 80/tcp

check:

$ sudo ufw status verbose
$ sudo iptables -S

13.2) Installing and configuring fail2ban:

$ sudo apt install fail2ban
$ sudo systemctl enable fail2ban
$ sudo systemctl status fail2ban

Protect every service you use via /etc/fail2ban/jail.local; fail2ban already ships with predefined filters that can be enabled:

$ ls /etc/fail2ban/filter.d/

then:

$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
$ sudo nano /etc/fail2ban/jail.local

where you will find the predefined filters, which can be enabled with "enabled = true".

example:

[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 120
ignoreip = whitelist-IP

# detect password authentication failures
[apache]
enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6

# detect potential search for exploits and php vulnerabilities
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache*/*error.log
maxretry = 6

# detect Apache overflow attempts
[apache-overflows]
enabled = true
port = http,https
filter = apache-overflows
logpath = /var/log/apache*/*error.log
maxretry = 2

# detect failures to find a home directory on a server
[apache-nohome]
enabled = true
port = http,https
filter = apache-nohome
logpath = /var/log/apache*/*error.log
maxretry = 2
$ sudo systemctl restart fail2ban 
$ sudo fail2ban-client status

13.3) Integrating Nextcloud into fail2ban:

$ sudo nano /etc/fail2ban/filter.d/nextcloud.conf

and insert the code below:

[Definition]
_groupsre = (?:(?:,?\s*"\w+":(?:"[^"]+"|\w+))*)
failregex = ^\{%(_groupsre)s,?\s*"remoteAddr":"<HOST>"%(_groupsre)s,?\s*"message":"Login failed:
            ^\{%(_groupsre)s,?\s*"remoteAddr":"<HOST>"%(_groupsre)s,?\s*"message":"Trusted domain error.
datepattern = ,?\s*"time"\s*:\s*"%%Y-%%m-%%d[T ]%%H:%%M:%%S(%%z)?"

At this point, add the Nextcloud-specific jail to the Jail section:

$ sudo nano /etc/fail2ban/jail.local

and insert:

[nextcloud]
backend = auto
enabled = true
port = 80,443
protocol = tcp
filter = nextcloud
maxretry = 3
bantime = 86400
findtime = 43200
logpath = /var/www/nextcloud/data/nextcloud.log

check with:

$ sudo systemctl restart fail2ban
$ sudo fail2ban-client status nextcloud

If fail2ban fails to start:

$ sudo nano /etc/fail2ban/jail.local

and set:

backend = systemd
14) Trusted Domain

list only the IPs and domains that are allowed to access the server:

$ sudo nano /var/www/nextcloud/config/config.php
'trusted_domains' =>
  array (
    0 => '192.168.1.122',
    1 => 'my-domain.com',
  ),
15) Enable memory caching, Redis, and the default phone region
$ sudo nano /var/www/nextcloud/config/config.php

and insert before the trusted_domains entry:

'memcache.local' => '\OC\Memcache\APCu',

then insert after the 'installed' => true entry:

'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'memcache.distributed' => '\OC\Memcache\Redis',
'redis' => [
'host' => 'localhost',
'port' => 6379,
],

followed by:

'default_phone_region' => 'IT',
16) Background Job

Nextcloud needs a cron job to lighten the load. By default it is set to AJAX; go to the Basic settings menu and set it to Cron:


Then create a cron job that runs in the background every 5 minutes:

$ sudo crontab -u www-data -e

and insert at the bottom:

*/5 * * * * php8.2 --define apc.enable_cli=1 /var/www/nextcloud/cron.php

verify with:

$ sudo crontab -u www-data -l
17) Force SSL connections with the self-signed certificate
$ sudo nano /etc/apache2/sites-available/000-default.conf

replace the whole file with the content below, which redirects all requests to HTTPS:

Alias /nextcloud "/var/www/nextcloud/"

<VirtualHost *:80>
    ServerAdmin admin@example

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
</VirtualHost>
18) Automatic daily security updates:
$ sudo apt install unattended-upgrades
$ sudo nano /etc/apt/apt.conf.d/02periodic

insert:

APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "1";
APT::Periodic::Verbose "2";

then run:

$ sudo unattended-upgrades -d
19) Sudo MUST always ask for the password:
$ sudo nano /etc/sudoers.d/010_pi-nopasswd

change it from this:

YOUR-USER ALL=(ALL) NOPASSWD: ALL

to this:

YOUR-USER ALL=(ALL) PASSWD: ALL
20) Securing MySQL
$ sudo mysql_secure_installation
Enter current password for root (enter for none): Press Enter
Set root password? [Y/n] Y 
Remove anonymous users? [Y/n] Y 
Disallow root login remotely? [Y/n] Y 
Remove test database and access to it? [Y/n] Y 
Reload privilege tables now? [Y/n] Y
21) Securing SSH

Immediately change the default port, check that root login is not allowed, and above all use an SSH key:

$ sudo nano /etc/ssh/sshd_config
#PermitRootLogin prohibit-password
Port 2223
22) Backup

Backups should be performed periodically, to external media and to a remote location. You can use rsync (guide), tar (guide), dd (guide), scp (guide), or lsyncd (guide).

23) VPN

The most secure way to access the server remotely is through a VPN. If your router supports IPsec or WireGuard (e.g. the fritzbox 7590), this is straightforward; otherwise you can build your own VPN server using openvpn or wireguard.

 


enjoy 😉

 

Monitoring a Raspberry Pi with RPI-Monitor

A guide on how to monitor a Raspberry Pi server through a web UI using RPI-Monitor.

sudo apt update
sudo apt install dirmngr -y
sudo wget http://goo.gl/vewCLL -O /etc/apt/sources.list.d/rpimonitor.list
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com E4E362DE2C0D3C0F
sudo apt update
sudo apt install rpimonitor
sudo /etc/init.d/rpimonitor update

Enable rpi-monitor at startup as a service:

sudo nano /etc/systemd/system/rpimonitor.service

and insert the lines below:

[Unit]
Description=RPi-Monitor daemon
Before=multi-user.target
After=remote-fs.target
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=on-failure
KillMode=mixed
Nice=19
ExecStartPre=/bin/sleep 10
ExecStart=/usr/bin/rpimonitord
ExecStop=/bin/kill $MAINPID
StandardOutput=append:/var/log/rpimonitor.log
StandardError=append:/var/log/rpimonitor.log

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl enable rpimonitor

At this point RPi-Monitor can be reached at:

http://ip_address:8888


enjoy 😉

 

DHCP server on Debian 10 Buster

Installation and configuration of a DHCP server on Debian 10 Buster:

# apt install isc-dhcp-server

Set the network interface to use, in my case eth0:

# nano /etc/default/isc-dhcp-server
# Defaults for isc-dhcp-server initscript
# sourced by /etc/init.d/isc-dhcp-server
# installed at /etc/default/isc-dhcp-server by the maintainer scripts

#
# This is a POSIX shell fragment
#

# Path to dhcpd's config file (default: /etc/dhcp/dhcpd.conf).
#DHCPD_CONF=/etc/dhcp/dhcpd.conf

# Path to dhcpd's PID file (default: /var/run/dhcpd.pid).
#DHCPD_PID=/var/run/dhcpd.pid

# Additional options to start dhcpd with.
#       Don't use options -cf or -pf here; use DHCPD_CONF/ DHCPD_PID instead
#OPTIONS=""

# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
#       Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES="eth0"

Then edit the dhcpd.conf file with the parameters of your network, in my case 192.168.100.0/24:

# nano /etc/dhcp/dhcpd.conf
# Sample configuration file for ISC dhcpd for Debian
#
#

# The ddns-updates-style parameter controls whether or not the server will
# attempt to do a DNS update when a lease is confirmed. We default to the
# behavior of the version 2 packages ('none', since DHCP v2 didn't
# have support for DDNS.)
ddns-update-style none;

# option definitions common to all supported networks...
option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;
default-lease-time 600;
max-lease-time 7200;

# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
#authoritative;

# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;

# No service will be given on this subnet, but declaring it helps the
# DHCP server to understand the network topology.

#subnet 10.152.187.0 netmask 255.255.255.0 {
#}

# This is a very basic subnet declaration.
subnet 192.168.100.0 netmask 255.255.255.0 {
range 192.168.100.150 192.168.100.160;
option routers 192.168.100.1;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.100.254;
option domain-name-servers 192.168.100.1, 192.168.100.2;
option ntp-servers 192.168.100.1;
option netbios-name-servers 192.168.100.1;
option netbios-node-type 8;
}
#subnet 10.254.239.0 netmask 255.255.255.224 {
#  range 10.254.239.10 10.254.239.20;
#  option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;
#}

# This declaration allows BOOTP clients to get dynamic addresses,
# which we don't really recommend.

#subnet 10.254.239.32 netmask 255.255.255.224 {
#  range dynamic-bootp 10.254.239.40 10.254.239.60;
#  option broadcast-address 10.254.239.31;
#  option routers rtr-239-32-1.example.org;
#}

# A slightly different configuration for an internal subnet.
#subnet 10.5.5.0 netmask 255.255.255.224 {
#  range 10.5.5.26 10.5.5.30;
#  option domain-name-servers ns1.internal.example.org;
#  option domain-name "internal.example.org";
#  option routers 10.5.5.1;
#  option broadcast-address 10.5.5.31;
#  default-lease-time 600;
#  max-lease-time 7200;
#}

# Hosts which require special configuration options can be listed in
# host statements.   If no address is specified, the address will be
# allocated dynamically (if possible), but the host-specific information
# will still come from the host declaration.

#host passacaglia {
#  hardware ethernet 0:0:c0:5d:bd:95;
#  filename "vmunix.passacaglia";
#  server-name "toccata.fugue.com";
#}

# Fixed IP addresses can also be specified for hosts.   These addresses
# should not also be listed as being available for dynamic assignment.
# Hosts for which fixed IP addresses have been specified can boot using
# BOOTP or DHCP.   Hosts for which no fixed address is specified can only
# be booted with DHCP, unless there is an address range on the subnet
# to which a BOOTP client is connected which has the dynamic-bootp flag
# set.
#host fantasia {
#  hardware ethernet 08:00:07:26:c0:a5;
#  fixed-address fantasia.fugue.com;
#}

# You can declare a class of clients and then do address allocation
# based on that.   The example below shows a case where all clients
# in a certain class get addresses on the 10.17.224/24 subnet, and all
# other clients get addresses on the 10.0.29/24 subnet.

#class "foo" {
#  match if substring (option vendor-class-identifier, 0, 4) = "SUNW";
#}

#shared-network 224-29 {
#  subnet 10.17.224.0 netmask 255.255.255.0 {
#    option routers rtr-224.example.org;
#  }
#  subnet 10.0.29.0 netmask 255.255.255.0 {
#    option routers rtr-29.example.org;
#  }
#  pool {
#    allow members of "foo";
#    range 10.17.224.10 10.17.224.250;
#  }
#  pool {
#    deny members of "foo";
#    range 10.0.29.10 10.0.29.230;
#  }
#}
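As a sanity check on the pool declared above (192.168.100.150 through .160), its size can be computed from the range line. A self-contained sketch with the line inlined; on a real server you would extract it with grep from /etc/dhcp/dhcpd.conf:

```shell
# Parse the last octets of a "range" directive and print the pool size.
range_line='range 192.168.100.150 192.168.100.160;'
first=$(echo "$range_line" | awk '{print $2}' | awk -F. '{print $4 + 0}')
last=$(echo "$range_line"  | awk '{print $3}' | awk -F. '{print $4 + 0}')
echo "pool size: $((last - first + 1)) addresses"   # pool size: 11 addresses
```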

After the changes, restart the service:

# systemctl restart isc-dhcp-server.service

enjoy 😉