I wanted to update my lens correction database in Darktable and none of the instructions I found online worked.
I found out that for Canon, the only file that Darktable reads is /usr/share/lensfun/slr-canon.xml
So I concatenated the new database (mil-canon.xml) with slr-canon.xml and overwrote /usr/share/lensfun/slr-canon.xml.
cat slr-canon.xml.old /tmp/mil-canon.xml > /tmp/x.xml
sudo cp /tmp/x.xml /usr/share/lensfun/slr-canon.xml
Then I manually edited the merged file so that it contains a single <lensdatabase> element, deleting the first file's closing </lensdatabase> tag and the second file's opening <lensdatabase> tag and DOCTYPE line:
<!DOCTYPE lensdatabase SYSTEM "lensfun-database.dtd">
I closed and opened Darktable and the new lenses were available for correction.
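The manual merge above can be sketched as a small script. The sample inputs below stand in for the real slr-canon.xml and mil-canon.xml, and the sed patterns assume each file has one DOCTYPE line and one pair of <lensdatabase> tags:

```shell
# Create two tiny stand-ins for the real lensfun files.
cat > slr-canon.xml <<'EOF'
<!DOCTYPE lensdatabase SYSTEM "lensfun-database.dtd">
<lensdatabase>
    <lens><model>Canon EF 50mm f/1.8</model></lens>
</lensdatabase>
EOF
cat > mil-canon.xml <<'EOF'
<!DOCTYPE lensdatabase SYSTEM "lensfun-database.dtd">
<lensdatabase>
    <lens><model>Canon EF-M 22mm f/2</model></lens>
</lensdatabase>
EOF

# Keep the first file's header but drop its closing tag; drop the second
# file's DOCTYPE and opening tag, so only one <lensdatabase> element remains.
{
  sed '/<\/lensdatabase>/d' slr-canon.xml
  sed '/<!DOCTYPE/d; /<lensdatabase>/d' mil-canon.xml
} > merged.xml
cat merged.xml
```

The result is one well-formed database file that can then be copied over /usr/share/lensfun/slr-canon.xml as above.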
Darktable needs a more user-friendly way to use lensfun; not everybody is comfortable concatenating and editing files on the command line.
I have an old Perl script that requests data from an SQLite database.
On one of my servers, calling DBI->connect("dbi:SQLite:dbname=database.db","","") resulted in a segmentation fault.
Intriguingly, another computer running the same Ubuntu 16.04.5 ran the script without a segmentation fault.
After unsuccessfully trying Ubuntu's Perl DBD::SQLite package and other versions from CPAN, the solution was to install DBD::SQLite from CPAN into a user library with perl Makefile.PL PREFIX=/pathtolibrary/
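As a sketch, the procedure went roughly like this. The prefix path is a placeholder, the build steps are shown as comments since they need network access and a toolchain, and the exact subdirectory under the prefix depends on the Perl version:

```shell
# Illustrative only: build DBD::SQLite from CPAN into a private library.
PREFIX="$HOME/perl5-dbd"   # hypothetical user library location
# Inside the unpacked DBD-SQLite source directory downloaded from CPAN:
#   perl Makefile.PL PREFIX="$PREFIX"
#   make && make test && make install
# Then point Perl at the user library before running the script (the lib/
# layout under $PREFIX varies with the Perl version):
export PERL5LIB="$PREFIX/lib/perl5:${PERL5LIB:-}"
echo "$PERL5LIB"
```

With PERL5LIB set, the script picks up the privately built DBD::SQLite instead of the system one that was segfaulting.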
1. Install postfix and configure it as an Internet site.
2. Enter the Gmail details in two files: /etc/postfix/sasl_passwd and /etc/postfix/main.cf. In /etc/postfix/sasl_passwd, add a single line of the form (for Gmail, the password should be an app password):
[smtp.gmail.com]:587 username@gmail.com:password
Then hash the file:
$ sudo postmap /etc/postfix/sasl_passwd
This should create /etc/postfix/sasl_passwd.db. In /etc/postfix/main.cf, set:
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
# These two additional settings are only needed when using port 465
#smtp_tls_wrappermode = yes
#smtp_tls_security_level = encrypt
recipient_delimiter = +
mailbox_size_limit = 0
$ sudo systemctl restart postfix
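Once postfix restarts cleanly, a quick way to exercise the relay is to pipe a message to the sendmail wrapper. The recipient address below is a placeholder, so the actual send is left commented out:

```shell
# Compose a minimal test message; you@example.org is a placeholder recipient.
msg="$(printf 'Subject: postfix relay test\n\nSent through smtp.gmail.com.')"
printf '%s\n' "$msg"
# To actually send it through the configured relay:
#   printf '%s\n' "$msg" | sendmail you@example.org
```

If the message does not arrive, /var/log/mail.log usually shows why the relay rejected it.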
The sender will appear as the full name of the user account that sends the email, so setting the full name to something like firstname.lastname@example.org helps you quickly identify the origin of an automated email when you receive them from different servers.
To edit a user’s full name:
sudo usermod -c "email@example.com" john
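The full name set by usermod -c lives in the fifth (GECOS) field of /etc/passwd. A quick way to inspect it; the sample line below is made up:

```shell
# Extract the GECOS (comment/full-name) field from a passwd-format line.
line='john:x:1001:1001:email@example.com:/home/john:/bin/bash'
printf '%s\n' "$line" | cut -d: -f5
# On a live system, the equivalent is:
#   getent passwd john | cut -d: -f5
```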
Backing up data via ssh to or from a remote server is a problem when multiple users have files and directories that are not world-readable: the remote backup user cannot read everything on the local system, and the local root user cannot write to the sshfs mount or scp to the remote backup server, because I configure my servers to reject root logins.
Therefore, it is necessary to use access control lists (ACLs) to give a remote backup user full read-only access to the SSH/NFS-mounted filesystems that need to be backed up.
To accomplish this, after creating a local backup user, set read-only permissions on the desired directory:
sudo setfacl -m u:backup:r-X -R /data
-R applies the entry recursively to user "backup" on the local system; the capital X grants execute (directory traversal) on directories only, not on regular files.
To check the ACL, use:
sudo getfacl /data
In the remote backup system, create a backup user who can ssh into the system to be backed up.
Use sshfs to mount /data on the remote system. The ACL will allow the backup user to read all files and directories.
One caveat is that the setfacl command must be run again before every backup: if a user later runs chmod 700 or chmod 600 on a directory or file, chmod recalculates the ACL mask and that item becomes unreadable by the backup user. Reapplying the read permission each time ensures everything is readable during the backup.
I keep forgetting how to upload a directory recursively using the Linux sftp command line program.
The trick is to first create the target directory on the sftp server with mkdir, and then put -r the local directory.
For example, to recursively upload a local directory "test" that contains subdirectories and files:
sftp> mkdir test
sftp> put -r test
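The same two steps can be scripted with sftp's batch mode (-b). The host and user below are placeholders, so the actual transfer is left commented out:

```shell
# Write the two sftp commands to a batch file.
cat > upload.batch <<'EOF'
mkdir test
put -r test
EOF
cat upload.batch
# Run them non-interactively against the server (placeholder host/user):
#   sftp -b upload.batch user@sftp.example.org
```

Note that batch mode aborts on the first error, so if "test" may already exist on the server, prefix the command as "-mkdir test" to ignore its failure.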