Duplicity: Using keys cached in gpg-agent does not work with cron
On my system I do not want to store the real passwords in the configuration files when using the duplicity backend. Instead I want to use keychain and gpg-agent to cache the signing key at system startup, and not store the encryption key on the system at all. The options for duplicity would be options = --use-agent --gpg-options '--batch --no-tty'
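For context, a sketch of what such a job file could look like. This is only an assumption about a typical /etc/backup.d dup job, not my actual configuration; the section and option names and the key IDs below are illustrative and should be checked against the dup handler's example.dup:

```
# /etc/backup.d/90.dup (illustrative sketch, not a real configuration)
options = --use-agent --gpg-options '--batch --no-tty'

[gpg]
sign = yes
# made-up key IDs for illustration
encryptkey = DD8D7012F15A5E34
signkey = 80FXXXXXXXXXXXXX
# deliberately wrong; the real passphrase comes from gpg-agent
password = dummy

[source]
include = /etc

[dest]
desturl = file:///var/backups/dup
```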
With the current version of the handler this was not possible when running backupninja from a cron job (running it from a terminal there is no problem). The first call of duplicity under su -c worked: gpg was able to get the key from the agent and use it. But any subsequent call to gpg failed to find the keys. By adding gpg-connect-agent calls to the handler I could verify that the key was still in the agent's cache, yet neither a direct call to gpg nor the one made via duplicity was able to find it anymore; depending on the gpg configuration, they would either try to start pinentry to get the key or just fail.
Sample error message:
Error: gpg: using pgp trust model
Error: gpg: using "80FXXXX" as default secret key for signing
Error: gpg: using subkey DD8D7XXXXXXXXXXX instead of primary key 2CF60XXXXXXXXXXX
Error: gpg: This key belongs to us
Error: gpg: writing to stdout
Error: gpg: RSA/AES256 encrypted for: "DD8D7012F15A5E34 Encrypt key Dr. Blatt <xxx>"
Error: gpg: Sorry, we are in batchmode - can't get input
Error: ===== End GnuPG log =====
It took a lot of trial and error to find out what caused this problem. There somehow seems to be a side effect of using su -c without a tty (due to cron) that results in no access to the running gpg-agent. I do not know what exactly the problem is, but this commit fixes it for me by using sh -c instead of su -c when spawning off duplicity.
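My guess at the mechanism can be illustrated with a plain-shell sketch: sh -c inherits the caller's environment, while a login-style invocation (simulated here with env -i, since su itself needs a password) starts from a scrubbed environment in which gpg may no longer be able to locate the agent. The variable name GPG_AGENT_INFO and its value are purely illustrative:

```shell
#!/bin/sh
# GPG_AGENT_INFO is how older gpg releases located the agent socket;
# the path here is made up purely for illustration.
GPG_AGENT_INFO="$HOME/.gnupg/S.gpg-agent:1234:1"
export GPG_AGENT_INFO

# sh -c inherits the caller's environment, so the agent variable survives:
sh -c 'echo "sh -c : GPG_AGENT_INFO=${GPG_AGENT_INFO:-<unset>}"'

# env -i simulates the scrubbed environment a login-style su -c can leave
# behind when there is no tty; gpg then has no way to reach the agent:
env -i sh -c 'echo "su -c?: GPG_AGENT_INFO=${GPG_AGENT_INFO:-<unset>}"'
```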
With this change I can now use options = --use-agent --gpg-options '--batch --no-tty -v' together with wrong passwords in the configuration (they only exist to prevent errors when the handler checks them), and the credentials cached in gpg-agent are used for the duplicity backend.
I would have opened an MR with the fix but somehow was unable to. When I tried to open one via email, I was told it was not processed. The fix is in my fork.