I have an old Perl script that requests data from an SQLite database.
On one of my servers, calling DBI->connect("dbi:SQLite:dbname=database.db","","") resulted in a segmentation fault.
Intriguingly, another computer running the same Ubuntu 16.04.5 release was able to run the script without a segmentation fault.
After unsuccessfully trying Ubuntu's packaged Perl DBD::SQLite and other versions from CPAN, the solution was to install DBD::SQLite from CPAN into a user library with perl Makefile.PL PREFIX=/pathtolibrary/
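The user-library install went roughly like this (a sketch; the distribution version, /pathtolibrary/, and the script name are placeholders, and the exact lib subdirectory for PERL5LIB varies by Perl version and architecture):

```shell
# Download and unpack the DBD::SQLite distribution from CPAN
cpan -g DBD::SQLite
tar xzf DBD-SQLite-*.tar.gz
cd DBD-SQLite-*/

# Build and install into a private library instead of the system paths
perl Makefile.PL PREFIX=/pathtolibrary/
make && make test && make install

# Point the script at the private library, e.g.:
PERL5LIB=/pathtolibrary/lib/perl5 perl script.pl
```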
1. Install postfix and configure it as an Internet site.
2. Enter the Gmail details in two files, /etc/postfix/main.cf and /etc/postfix/sasl_passwd.

Add to /etc/postfix/main.cf:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
# These two additional settings are only used with port 465
#smtp_tls_wrappermode = yes
#smtp_tls_security_level = encrypt
recipient_delimiter = +
mailbox_size_limit = 0

3. Hash the password file:

$ sudo postmap /etc/postfix/sasl_passwd

This should create /etc/postfix/sasl_passwd.db

4. Restart postfix:

$ sudo systemctl restart postfix
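For reference, /etc/postfix/sasl_passwd is expected to contain a single line matching the relayhost, followed by the Gmail address and password (the credentials below are placeholders; with 2-factor authentication enabled, this should be a Gmail app password):

```
[smtp.gmail.com]:587 username@gmail.com:app-password
```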
The full name of the user who sends emails will show as the sender, so setting the full name to something like email@example.com can help quickly identify the origin of an automated email, in case you receive them from different servers.
To edit a user’s full name:
usermod -c "firstname.lastname@example.org" john
Having multiple users whose directories contain non-readable files and directories is a problem when backing up data via ssh to or from a remote server: the remote backup user cannot read all files and directories on the local system, and the local root user cannot write to the sshfs mount or scp to the remote backup server, because I configure my servers to reject root logins.
Therefore, it is necessary to use access control lists (ACL) to allow a remote backup user to have full read-only permission on SSH/NFS mounted filesystems that need to be backed up.
To accomplish this, after creating a local backup user, set read-only permissions on the desired directory:
sudo setfacl -R -m u:backup:r-X /data

-R applies the entry recursively to user "backup" on the local system; the capital X grants execute (traversal) permission on directories only, not on regular files.
To check the ACL, use getfacl:

getfacl /data
On the remote backup system, create a backup user who can ssh into the system to be backed up.
Use sshfs to mount /data/ from the system being backed up. The ACL will allow the backup user to read all files and directories.
One caveat: the setfacl command must be re-issued before every backup. If a user runs chmod 700 or chmod 600 on a file or directory, that file or directory becomes unreadable by the backup user, so the read permission must be re-applied each time to make sure everything is readable during the backup.
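Putting the steps together, a backup run driven from the remote backup system might look like this (a sketch; server1, the mount point, and the destination path are hypothetical, and the setfacl step runs on the system being backed up):

```shell
# On the system being backed up: re-apply the read-only ACL for user
# "backup" (needed before every run, in case users ran chmod 700/600)
sudo setfacl -R -m u:backup:r-X /data

# On the remote backup system, as user "backup": mount /data over sshfs,
# copy it, then unmount (hostname and paths are placeholders)
mkdir -p ~/mnt/data
sshfs backup@server1:/data ~/mnt/data
rsync -a ~/mnt/data/ /backups/server1/data/
fusermount -u ~/mnt/data
```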
I keep forgetting how to upload a directory recursively using the Linux sftp command line program.
The trick is to manually create the target directory on the sftp server and then use put -r.
For example, to recursively upload the local directory "test", which contains subdirectories and files:
sftp> mkdir test
sftp> put -r test
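The same two steps can also be scripted non-interactively with sftp's batch mode (a sketch; "remotehost" is a placeholder):

```shell
# Feed the same two commands to sftp via batch mode on stdin
sftp -b - remotehost <<'EOF'
mkdir test
put -r test
EOF
```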
I have two webservers that have no GUI and are managed remotely via ssh.
In order to synchronize data across the two servers in "real time", I opted for syncthing. However, syncthing is configured through a web-browser GUI, and no browser is installed on my servers.
The way I found to have access to the GUI of each server was to create an ssh tunnel and tunnel my local web browser traffic into the remote servers:
ssh -L 12341:localhost:8384 webserver1
ssh -L 12342:localhost:8384 webserver2
These commands will allow you to tunnel traffic from localhost port 12341 to webserver1 port 8384 (and vice versa) and from localhost port 12342 to webserver2 port 8384 (and vice versa). In other words, when you point the browser on your local computer to localhost:12341, it will connect to webserver1:8384, and localhost:12342 will connect to webserver2:8384.
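If you only need the tunnels while the browser tabs are open, the same commands can run in the background without opening a remote shell, using standard OpenSSH flags:

```shell
# -N: do not run a remote command; -f: go to background after authentication
ssh -f -N -L 12341:localhost:8384 webserver1
ssh -f -N -L 12342:localhost:8384 webserver2
```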