backupninja issues
https://0xacab.org/liberate/backupninja/-/issues

# PostgreSQL handler doesn't work on host installed on Debian 11 (Bullseye)
https://0xacab.org/liberate/backupninja/-/issues/11338 · Guillaume Subiron · updated 2021-12-22

On Debian 11 (Bullseye), the postgres user is installed with nologin as its shell:
```
# grep postgres /etc/passwd
postgres:x:108:65534::/home/postgres:/usr/sbin/nologin
```
This causes an error when the handler lists databases:
```
# su - postgres -c 'psql -AtU postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate"'
This account is currently not available.
```
And, even worse, the handler continues and tries to dump databases "This", "Account", "is", "currently", etc.
```
# ls /var/backups/postgres/
account.pg_dump.gz currently.pg_dump.gz is.pg_dump.gz This.pg_dump.gz
available..pg_dump.gz globals.sql.gz not.pg_dump.gz
```
`chsh postgres -s /bin/bash` fixes the problem, but backupninja should work out of the box. I don't know yet what the best solution would be. Using `sudo -u` instead of `su -` works, but backupninja should not depend on sudo…
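One sudo-free possibility (a sketch, not a decided fix) is to pass an explicit shell to `su`, the way the mysql handler already does with `su $user -s /bin/bash -c`:

```
# Force a usable shell with -s, so the postgres account's nologin
# shell is never invoked; no sudo required.
su -s /bin/bash postgres -c 'psql -AtU postgres -c "SELECT datname FROM pg_database WHERE NOT datistemplate"'
```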
# Impossible to create a `dup` backup action on Debian using ninjahelper
https://0xacab.org/liberate/backupninja/-/issues/11336 · phlummox · updated 2023-07-08

On Debian-based systems (I tested on Debian 11, "Bullseye", and Ubuntu 18.04, "Bionic"), it seems impossible to use `ninjahelper` to create a `dup` backup action, as trying to select "src" directories results in the error:
```
/usr/share/backupninja/dup.helper: line 484: do_dup_src: command not found
```
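A quick way to confirm what is going on (a diagnostic sketch, nothing more) is to check whether the installed helper ever defines the function it calls:

```
# List every occurrence of do_dup_src in the helper; if only call
# sites show up (no "do_dup_src()" definition), the function is
# missing from this version of the script.
grep -n 'do_dup_src' /usr/share/backupninja/dup.helper
```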
## Version of backupninja used
1.2.1-1
(Installed from the Debian 11 repository.)
## Steps to reproduce
This can be reproduced by running Debian in a Docker container -- `docker run --rm -it debian:bullseye`.
Within the container:
1. Install `duplicity` and `backupninja`:
```
$ apt-get update && apt-get install -y --no-install-recommends duplicity backupninja
```
2. Run `ninjahelper`:
```
$ ninjahelper
```
3. Select "new / create a new backup action"
4. Select "dup / incremental encrypted remote filesystem backup"
5. Select "src / choose files to include and exclude"
## Expected behaviour
The user should be able to specify files to include and exclude.
## Actual behaviour
The error message
```
/usr/share/backupninja/dup.helper: line 484: do_dup_src: command not found
```
briefly appears at the bottom of the screen, and it's impossible to progress further through the `ninjahelper` "wizard".

# backupninja fails with latest duplicity
https://0xacab.org/liberate/backupninja/-/issues/11335 · yova · updated 2022-03-05

Option `--extra-clean` was removed in duplicity [0.8.11](https://gitlab.com/duplicity/duplicity/-/blob/master/CHANGELOG.md#rel0811-2020-02-24).
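A possible fix (a sketch only, not the handler's actual code; the version parsing assumes output like `duplicity 0.8.20`) is to gate the flag on the installed version:

```
# Only append --extra-clean for duplicity releases that still have it
# (it was removed in 0.8.11).
dupversion=$(duplicity --version | awk '{print $2}')
if dpkg --compare-versions "$dupversion" lt 0.8.11; then
   execstr="$execstr --extra-clean"
fi
```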
# rdiff version 2 not compatible
https://0xacab.org/liberate/backupninja/-/issues/11334 · Daniel Horstmann · updated 2022-03-05

Hi,
after upgrading Debian to version 11 (bullseye), rdiff-backup was upgraded from version 1 to version 2, and it's no longer possible to use backupninja with rdiff configs.
I guess the CLI options changed, so backupninja can't handle them anymore.
Output of backupninja trying to back up:
```
Error: Fatal Error: Switches missing or wrong number of arguments
See the rdiff-backup manual page for more information.
Fatal Error: Truncated header string (problem probably originated remotely)
[...]
```
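"Truncated header string (problem probably originated remotely)" can indicate that the rdiff-backup versions on the two ends no longer match; the 1.x and 2.x series generally cannot talk to each other over ssh. A first check (a sketch; `backuphost` stands for whatever host your rdiff config points at):

```
# Compare local and remote rdiff-backup versions; both ends generally
# need to run the same major version.
rdiff-backup --version
ssh backuphost rdiff-backup --version
```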
Does anyone else have this problem, and has maybe already solved it?
Thanks and best regards

# Using bwlimit and sshoptions results in duplicate --remote-schema
https://0xacab.org/liberate/backupninja/-/issues/11324 · Jerome Charaoui · updated 2021-01-12
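No description was given, but the title suggests the dup handler appends `--remote-schema` once for `bwlimit` and again for `sshoptions`. A possible fix (a sketch under the assumption that the bandwidth limit is implemented via trickle; variable names are illustrative) is to compose the schema once:

```
# Build the remote schema in one place, folding the bandwidth limit
# into the ssh command instead of emitting a second --remote-schema.
schema="ssh $sshoptions"
if [ -n "$bwlimit" ]; then
   schema="trickle -s -d $bwlimit -u $bwlimit $schema"
fi
execstr="$execstr --remote-schema '$schema'"
```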
# Duplicity: Using keys cached in gpg-agent does not work with cron
https://0xacab.org/liberate/backupninja/-/issues/11305 · Markus Blatt · updated 2021-01-18

On my system I do not want to store the correct passwords in the configuration files when using the duplicity backend. Instead I want to use keychain and gpg-agent to cache the signing key at system startup, and to not even store the encryption key on the system. The options for duplicity would be `options = --use-agent --gpg-options '--batch --no-tty'`.

With the current version of the handler this was not possible when running backupninja from a cron job (running it from a terminal, there is no problem). The first call to duplicity under `su -c` worked: gpg was able to get the key from the agent and use it. But any subsequent calls to gpg failed to find the keys. By adding some gpg-connect-agent calls to the handler I could verify that the key was still in the cache, but neither a direct call to gpg nor the one via duplicity was able to find it anymore; they would either try to start pinentry to get the key or just fail (depending on the gpg configuration).
Sample error message:
```
Error: ===== Begin GnuPG log =====
Error: gpg: using pgp trust model
Error: gpg: using "80FXXXX" as default secret key for signing
Error: gpg: using subkey DD8D7XXXXXXXXXXX instead of primary key 2CF60XXXXXXXXXXX
Error: gpg: This key belongs to us
Error: gpg: writing to stdout
Error: gpg: RSA/AES256 encrypted for: "DD8D7012F15A5E34 Encrypt key Dr. Blatt <xxx>"
Error: gpg: Sorry, we are in batchmode - can't get input
Error: ===== End GnuPG log =====
```
It took a lot of trial and error to find out what caused this problem. There somehow seems to be a side effect of using `su -c` with no tty (due to cron) that results in no access to the running gpg-agent. I do not know exactly what the problem is, but this commit fixes it for me by using `sh -c` instead of `su -c` when spawning off duplicity.
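The change amounts to something like the following (a sketch of the idea, not the literal patch; `$execstr` stands for the assembled duplicity command line):

```
# Before: a fresh su session loses the caller's gpg-agent environment
#   output=`su -c "$execstr" 2>&1`
# After: a plain shell inherits the environment, so gpg keeps finding
# the agent on every invocation
output=`sh -c "$execstr" 2>&1`
```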
With this change I can now use `options = --use-agent --gpg-options '--batch --no-tty -v'` together with wrong passwords (to prevent errors when the handler checks them), and the credentials cached in gpg-agent are used for the duplicity backend.
I would have opened an MR with the fix but somehow was unable to; when trying to open one via email I was told it was not processed. The fix is [in my fork](https://0xacab.org/blattms/backupninja/-/tree/make-duplicity-work-with-cached-gpg-keys).

# maildir backup rotation is inconsistent when daily is greater than 7 or monthly greater than 4
https://0xacab.org/liberate/backupninja/-/issues/11301 · kienan · updated 2021-01-07

Hi,
we ran into an issue where the rotation of backups is inconsistent, though perhaps it is just differing expectations about how the rotation should behave.
We were using daily = 14 to keep two weeks of daily backups, followed by a series of weekly and monthly backups. We discovered that the dates of the weekly backups are inconsistent: for example, on the 18th of June the weeklies were from the 3rd, 2nd and 1st of June and the 30th of April.
After tracing through the issue, I believe the cause is in this rotation code in the maildir handler, shown below. If keepdaily is 14, max is 15. If daily.15 exists and there is no weekly.1 (e.g. the weekly backups have just been rotated), daily.15 is rotated into weekly.1. My expectation is that a weekly checkpoint is taken every week (7 days), instead of a period that is two weeks old being rotated as if it were the last week.
The same logic applies to the weekly-to-monthly rotation: if more than 4 weeks of weeklies are kept, monthly.1 will be incorrectly offset.
```
max=\$((keepdaily+1))
if [ \( \$keepweekly -gt 0 -a -d $backuproot/daily.\$max \) -a ! -d $backuproot/weekly.1 ]; then
echo "Debug: daily.\$max --> weekly.1"
mv $backuproot/daily.\$max $backuproot/weekly.1
date +%c%n%s > $backuproot/weekly.1/rotated
fi
max=\$((keepweekly+1))
if [ \( \$keepmonthly -gt 0 -a -d $backuproot/weekly.\$max \) -a ! -d $backuproot/monthly.1 ]; then
echo "Debug: weekly.\$max --> monthly.1"
mv $backuproot/weekly.\$max $backuproot/monthly.1
date +%c%n%s > $backuproot/monthly.1/rotated
fi
```
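One possible direction (an untested sketch, not the handler's actual code, written without the heredoc `\$` escaping the handler uses): checkpoint the week-old daily into weekly.1 as a hard-linked copy, so weekly.N really is about N weeks old regardless of keepdaily:

```
# Hard-link the daily that is 7 days old into weekly.1 instead of
# promoting the oldest daily (daily.$max); cp -al shares the files, so
# the checkpoint costs almost no extra space on the backup host.
if [ "$keepweekly" -gt 0 ] && [ -d "$backuproot/daily.7" ] \
      && [ ! -d "$backuproot/weekly.1" ]; then
   echo "Debug: daily.7 --> weekly.1"
   cp -al "$backuproot/daily.7" "$backuproot/weekly.1"
   date +%c%n%s > "$backuproot/weekly.1/rotated"
fi
```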
# mysqlhotcopy fails when "databases = all" (deprecate method?)
https://0xacab.org/liberate/backupninja/-/issues/11291 · Jerome Charaoui · updated 2021-01-13

The failure is caused by mysqlhotcopy trying and failing to hot-copy the `performance_schema` table:

```
Debug: su root -c "/usr/bin/mysqlhotcopy --quiet --allowold --regexp /.*/./.*/ /var/backups/mysql/hotcopy"
Warning: DBD::mysql::db do failed: command denied to user 'root'@'localhost' for table 'accounts' at /usr/bin/mysqlhotcopy line 523.
Warning: Failed to hotcopy all mysql databases
```
This has been reported in Debian bug #735014 as well as [upstream](https://bugs.mysql.com/bug.php?id=66589) (six years ago). Upstream eventually responded that `mysqlhotcopy` is deprecated and no longer distributed with MySQL 5.7.
Considering this, plus the fact that `mysqlhotcopy` only supports the MyISAM and ARCHIVE engines while InnoDB is the default nowadays, I suggest we remove support for mysqlhotcopy from the handler and replace it with support for XtraBackup or [MariaDB Backup](https://mariadb.com/kb/en/library/mariadb-backup-overview/) (the latter being a more featureful fork of the former). Both are packaged in Debian.
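For reference, a minimal MariaDB Backup run looks roughly like this (a sketch; the target directory is illustrative, not a handler setting):

```
# Take a full physical backup (InnoDB included), then "prepare" it so
# the copied datadir is consistent and restorable.
mariabackup --backup --user=root --target-dir=/var/backups/mysql/mariabackup
mariabackup --prepare --target-dir=/var/backups/mysql/mariabackup
```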
# Database handlers produce su-related errors in logs
https://0xacab.org/liberate/backupninja/-/issues/11290 · Jerome Charaoui · updated 2021-01-13

This was initially reported in Debian bug #879664 against the pgsql handler on the Debian BTS, but I'm also seeing the same problem with the mysql one.
Basically, when backing up multiple smallish databases, `su` is called many times in quick succession, leading to messages like this appearing in system logs:
```
2017-10-23T01:00:15.795982+02:00 fer systemd[1]: user@500.service: Start request repeated too quickly.
2017-10-23T01:00:15.796132+02:00 fer systemd[1]: Failed to start User Manager for UID 500.
2017-10-23T01:00:15.796271+02:00 fer systemd[1]: user@500.service: Unit entered failed state.
2017-10-23T01:00:15.796409+02:00 fer systemd[1]: user@500.service: Failed with result 'start-limit-hit'.
2017-10-23T01:00:15.796702+02:00 fer su[11455]: pam_systemd(su:session): Failed to create session: Start job for unit user@500.service failed with 'failed'
2017-10-23T01:00:21.594947+02:00 fer systemd[1]: user@500.service: Start request repeated too quickly.
2017-10-23T01:00:21.595070+02:00 fer systemd[1]: Failed to start User Manager for UID 500.
2017-10-23T01:00:21.595212+02:00 fer systemd[1]: user@500.service: Failed with result 'start-limit-hit'.
2017-10-23T01:00:21.596172+02:00 fer su[11486]: pam_systemd(su:session): Failed to create session: Start job for unit user@500.service failed with 'failed'
```
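A way to avoid tripping pam_systemd's rate limit (a sketch only; the loop and variable names are illustrative, not the handler's actual structure) would be to batch all dumps into a single `su` session instead of one per database:

```
# Write one script that dumps every database, then run it under a
# single su session, so PAM only creates one user session per run.
dumpscript=$(mktemp)
for db in $databases; do
   echo "mysqldump --defaults-file=/etc/mysql/debian.cnf '$db' > '$dumpdir/$db.sql'" >> "$dumpscript"
done
su "$user" -s /bin/bash < "$dumpscript"
rm -f "$dumpscript"
```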
# mysql handler using defaults-extra-file lets /root/.my.cnf override credentials
https://0xacab.org/liberate/backupninja/-/issues/11263 · LeLutin · updated 2018-09-18 · assigned to Jerome Charaoui

In a really weird situation today where root didn't have any special permissions on mysql (I believe it was intentional, although really weird), the mysql handler was not able to perform dumps of databases other than those owned by the root user.
This was happening because `--defaults-extra-file` is evaluated before `/root/.my.cnf`, and so the file in root's home directory was overriding credentials in `/etc/mysql/debian.cnf`.
I'd argue for the handler to switch to `--defaults-file` instead of `--defaults-extra-file`, which doesn't suffer from the same overriding potential, although I don't know if there was a reason to use the `-extra` variant in the first place.
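The difference in practice (a sketch; note that `--defaults-file` must be the first option on the command line):

```
# --defaults-extra-file is read *in addition to* the default files,
# and ~/.my.cnf is read after it, so /root/.my.cnf still wins.
# --defaults-file makes debian.cnf the only configuration file read.
mysqldump --defaults-file=/etc/mysql/debian.cnf --all-databases > /var/backups/mysql/all.sql
```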
# Warn about empty dumpfiles / huge table issue
https://0xacab.org/liberate/backupninja/-/issues/3637 · rhatto · updated 2020-08-14 · assigned to Guillaume Subiron

Some months ago I had to restore a database from a gzipped mysql dump. To my surprise, the dump was an empty file!
Luckily, I also had a mysql hotbackup, so I had the opportunity to inspect what went wrong with the dump. It turned out I had a huge table (~1.4 GB) full of junk that somehow prevented mysqldump from doing its job.
At the time I was too busy to find out what exactly made mysqldump die: whether it was a timeout, a memory issue, buffer size, etc. Nor do I still have the original dump to test with.
The first thing that crossed my mind was: why didn't backupninja tell me that it had produced an empty dumpfile? Looking at the handler code, I only saw an exit status check:
```
output=`su $user -s /bin/bash -c "set -o pipefail ; $execstr" 2>&1`
code=$?
if [ "$code" == "0" ]
then
debug $output
info "Successfully finished dump of mysql database $db"
else
warning $output
warning "Failed to dump mysql databases $db"
fi
```
Wouldn't an additional size check also be needed? The handler could throw a warning in case of a zero-sized dump:
```
if [ ! -s "$dumpdir/${db}.sql" ]; then
warning "Dump of mysql database $db has zero size"
fi
```
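One extra wrinkle (my addition, a hedged sketch): when the handler compresses dumps, an "empty" `.sql.gz` file still contains a ~20-byte gzip header, so `-s` alone won't catch it; the decompressed stream has to be checked instead:

```
# A zero-size test that also works for compressed dumps: read the
# first byte of the decompressed stream and warn if there is none.
if ! zcat "$dumpdir/${db}.sql.gz" | head -c1 | grep -q .; then
   warning "Dump of mysql database $db has zero size"
fi
```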
*(from redmine: created on 2011-11-17)*

# bn tries emailing over-sized logs
https://0xacab.org/liberate/backupninja/-/issues/10172 · Ghost User · updated 2020-08-14

If rdiff-backup verbosity is set to -v6 in a backupninja configuration, this sometimes creates very large logs if rdiff-backup fails. One time this happened and backupninja tried sending an email with the entire log, but could not because the system ran out of memory. (It has 16 GB of RAM.) I think the memory usage was directly related to log size.
Another time, backupninja successfully created the email, but it did not pass through the mail queue because the message size was about 50 MB.
If anyone else is having this issue, one workaround is to not use the -v6 option with rdiff-backup. : )
It would be nice if backupninja truncated logs to a reasonable size. Since the most relevant errors are bound to occur at the end of the email, running a bytewise "tail" on the message should be sufficient. I figure that 10 KB is more than plenty of log in an email when the user can always check the log file in /var/log/.
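Something along these lines (a sketch; the mail invocation and subject are illustrative, and /var/log/backupninja.log is the default log location):

```
# Mail only the last 10 KB of the log instead of the whole thing.
tail -c 10240 /var/log/backupninja.log | mail -s "backupninja on $(hostname)" root
```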
*(from redmine: created on 2015-09-08)*

# getconf is 1) bash-only, 2) buggy
https://0xacab.org/liberate/backupninja/-/issues/11258 · Ghost User · updated 2021-06-27

1) bash-only
More specifically:
```
ret="${ret//\\*/__star__}"
```
(commented: "replace * with %, so that it is not globbed")
Either forget about /bin/sh and allow bashisms everywhere (IMHO the best option), or fix it to make it compatible with /bin/sh.
2) buggy:
`\` (as an example) gets replaced with `__star__`, because in the quoted `"${ret//\\*/__star__}"` the pattern `\\*` matches a literal `\` followed by anything, rather than simply `*`:
```
ret='*'; echo "${ret//\\*/__star__}" # outputs *, which is not replaced
ret='\*'; echo "${ret//\\*/__star__}" # outputs __star__ because of the \ prefix
ret='\'; echo "${ret//\\*/__star__}" # outputs __star__ because the pattern matches the \ itself
ret='\abcd'; echo "${ret//\\*/__star__}" # outputs __star__ because the match extends to the end of the string
```
The expression, if it is needed at all, should be
```
${ret//\*/__star__}
```
(escaping the `*` with a single `\`)
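With the single backslash, a quick check (my verification sketch) behaves as intended:

```
# Only a literal * is replaced; backslashes are left alone.
ret='*';     echo "${ret//\*/__star__}"   # __star__
ret='\*';    echo "${ret//\*/__star__}"   # \__star__
ret='\abcd'; echo "${ret//\*/__star__}"   # \abcd (unchanged)
```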
*(from redmine: created on 2016-03-18)*