Backing up data over ssh to or from a remote server is a problem when multiple users have directories containing non-readable files and subdirectories: the remote backup user cannot read every file on the local system, and the local root user cannot write to the sshfs mount or scp to the remote backup server, because I configure my servers to reject root logins.
Therefore, it is necessary to use access control lists (ACLs) to give a remote backup user read-only access to everything on the SSH/NFS-mounted filesystems that need to be backed up.
To accomplish this, after creating a local backup user, set read-only permissions on the desired directory:
sudo setfacl -m u:backup:r-X -R /data
-R applies the entry recursively; r-X grants user "backup" read permission on everything, plus execute (traverse) permission only on directories and on files that are already executable.
To check the resulting ACL, use getfacl /data.
On the remote backup system, create a backup user who can ssh into the system to be backed up. Then use sshfs to mount /data from that system; the ACL will allow the backup user to read all files and directories.
One caveat: the setfacl command must be issued every time before backing up. If a user runs chmod 700 or chmod 600 on a file or directory, it becomes unreadable by the backup user again (chmod rewrites the ACL mask), so it's important to re-grant the read permission to the backup user before every backup to make sure everything is readable.
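The re-apply step is easy to script. This is a minimal sketch; the data directory, backup user name, and the default-ACL refinement are assumptions, not part of the original setup:

```shell
#!/bin/sh
# Pre-backup ACL refresh (sketch). DATA_DIR and BACKUP_USER are assumptions.
DATA_DIR=/data
BACKUP_USER=backup

# Re-grant read (and directory-traverse) permission, undoing any
# chmod 700/600 a user applied since the last backup.
setfacl -R -m "u:${BACKUP_USER}:r-X" "$DATA_DIR"

# Optional: set a default ACL on directories so files created later
# inherit the entry (default ACLs only apply to directories).
find "$DATA_DIR" -type d -exec setfacl -m "d:u:${BACKUP_USER}:r-X" {} +

# Sanity check: show the backup user's entry.
getfacl "$DATA_DIR" | grep "user:${BACKUP_USER}"
```

Running this from cron just before the backup job keeps the ACL ahead of any permission changes users make during the day.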
I keep forgetting how to upload a directory recursively using the Linux sftp command line program.
The trick is to manually create the target directory in the sftp server then do put -r .
Example, to recursively upload the local directory “test” that contains subdirectories and files:
sftp> mkdir test
sftp> put -r test
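For scripted, non-interactive use, the same two commands can go in an sftp batch file run with -b; the host name and file paths below are assumptions:

```shell
# Write the sftp commands to a batch file.
cat > /tmp/sftp-upload.batch <<'EOF'
mkdir test
put -r test
EOF

# Run it non-interactively against your own server:
# sftp -b /tmp/sftp-upload.batch user@myserver.com
```

With -b, sftp aborts on the first failing command, so a pre-existing target directory will stop the batch; prefix the command with a dash (-mkdir test) to ignore that error.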
I have two webservers that have no GUI and are managed remotely via ssh.
To synchronize data across the two servers in "real time", I opted for syncthing. However, syncthing is configured through a web browser GUI, and no browser is installed on my servers.
The way I found to have access to the GUI of each server was to create an ssh tunnel and tunnel my local web browser traffic into the remote servers:
ssh -L 12341:localhost:8384 webserver1
ssh -L 12342:localhost:8384 webserver2
These commands tunnel traffic from localhost port 12341 to webserver1 port 8384 (and vice versa), and from localhost port 12342 to webserver2 port 8384 (and vice versa). In other words, when you point your browser at localhost:12341 on your local computer, it connects to webserver1:8384, and localhost:12342 connects to webserver2:8384.
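To avoid retyping the -L flags, the same tunnels can be declared once in ~/.ssh/config; the host aliases here are made up for illustration:

```
# ~/.ssh/config — hypothetical aliases for the two tunnels
Host webserver1-gui
    HostName webserver1
    LocalForward 12341 localhost:8384

Host webserver2-gui
    HostName webserver2
    LocalForward 12342 localhost:8384
```

After that, ssh webserver1-gui opens the first tunnel and ssh webserver2-gui the second.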
Let's Encrypt is a cool initiative that provides free SSL certificates to enable HTTPS everywhere.
To make the certificate installation process as painless as possible, they provide automated tools for many OSes.
Download their tool, run it on your Apache server, and you have HTTPS without the scary warnings that browsers show for self-signed certificates.
Their tool mostly works. Running certbot-auto on Ubuntu 14.04 gave me the error below:
Error in checking parameter list: AH00526: Syntax error on line 115 of /etc/apache2/sites-enabled/000-default-le-ssl.conf:
SSLCertificateFile: file '/etc/apache2/insert_cert_file_path' does not exist or is empty
The solution is to enable the Apache SSL module before running certbot-auto.
a2enmod ssl
./certbot-auto
After doing this, everything worked as expected.
Use the command below to connect to an sftp server over ssh with no password, using an RSA key as the authentication method.
lftp -u sftpuser, -e 'set sftp:connect-program "ssh -p [PORT] -i [path/id_rsa]"' sftp://myserver.com/
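The same connection can also synchronize a whole directory tree with lftp's mirror command; this is a sketch with placeholder port, key path, and directory names:

```shell
# Mirror a remote tree into a local directory over sftp with key auth.
# [PORT], the key path, and the directory names are placeholders.
lftp -u sftpuser, \
     -e 'set sftp:connect-program "ssh -p [PORT] -i [path/id_rsa]";
         mirror --verbose /remote/dir /local/dir;
         bye' \
     sftp://myserver.com/
```

mirror -R reverses the direction, uploading the local directory to the server.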
After I had to kill RStudio because the process it was running ate all the temporary space, RStudio would freeze when trying to open an R script.
In addition, when running a third party R function, I got this strange error saying it couldn’t change directory.
The fix was deleting this file: