Backport doc changes #8542

Merged
merged 2 commits into from
Nov 15, 2024
60 changes: 14 additions & 46 deletions docs/faq.rst
@@ -757,31 +757,25 @@ You will also get this error if you try to access a repository that uses the arg
We recommend upgrading to the latest stable version and trying again. We are sorry. We should have thought about forward
compatibility and implemented a more helpful error message.

Why does Borg extract hang after some time?
-------------------------------------------

When I do a ``borg extract``, after a while all activity stops, no cpu usage,
no downloads.

This may happen when the SSH connection is stuck on server side. You can
configure SSH on client side to prevent this by sending keep-alive requests,
for example in ~/.ssh/config:
Why am I seeing idle borg serve processes on the repo server?
-------------------------------------------------------------

::
Please see the next question.

Host borg.example.com
# Client kills connection after 3*30 seconds without server response:
ServerAliveInterval 30
ServerAliveCountMax 3
Why does Borg disconnect or hang when backing up to a remote server?
--------------------------------------------------------------------

You can also do the opposite and configure SSH on server side in
/etc/ssh/sshd_config, to make the server send keep-alive requests to the client:
Communication with the remote server (using an ssh: repo URL) happens via an SSH
connection. This can lead to some issues that would not occur during a local backup:

::
- Since Borg does not send data all the time, the connection may get closed, leading
to errors like "connection closed by remote".
- On the other hand, network issues may lead to a dysfunctional connection
that is only detected after some time by the server, leading to stale ``borg serve``
processes and locked repositories.

# Server kills connection after 3*30 seconds without client response:
ClientAliveInterval 30
ClientAliveCountMax 3
To fix such problems, please apply these :ref:`SSH settings <ssh_configuration>` so that
keep-alive requests are sent regularly.
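
For reference, the kind of client-side keep-alive setting described there looks roughly
like this (an illustrative sketch for ~/.ssh/config; host name and values are examples)::

    Host borg.example.com
        # Client drops the connection after 3*30 seconds without a server response:
        ServerAliveInterval 30
        ServerAliveCountMax 3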

How can I deal with my very unstable SSH connection?
----------------------------------------------------
@@ -797,32 +791,6 @@ could try to work around:
to do any more. Due to the way borg mount works, this might be less efficient
than borg extract for bigger volumes of data.
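
A sketch of that workaround (assuming a FUSE-capable client; ``rsync`` is just one
resumable copy tool you could use here, it is not mandated by the text above)::

    $ borg mount /path/to/repo::archive /mnt/borg
    # re-runnable, resumes partially copied files after a connection drop:
    $ rsync -avP /mnt/borg/ /restore/target/
    $ borg umount /mnt/borg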

Why do I get "connection closed by remote" after a while?
---------------------------------------------------------

When doing a backup to a remote server (using an ssh: repo URL), it sometimes
stops after a while (some minutes, hours, ... - not immediately) with a
"connection closed by remote" error message. Why?

That's a good question and we are trying to find a good answer in :issue:`636`.

Why am I seeing idle borg serve processes on the repo server?
-------------------------------------------------------------

Maybe the ssh connection between client and server broke down and that was not
yet noticed on the server. Try these settings:

::

# /etc/ssh/sshd_config on borg repo server - kill connection to client
# after ClientAliveCountMax * ClientAliveInterval seconds with no response
ClientAliveInterval 20
ClientAliveCountMax 3

If you have multiple borg create ... ; borg create ... commands in an already
serialized way in a single script, you need to give them ``--lock-wait N`` (with N
being a bit more than the time the server needs to terminate broken-down
connections and release the lock).
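
A minimal sketch of such a script (the 120 second value is illustrative; choose N a bit
larger than the time your server needs to drop dead connections)::

    # both commands wait up to 120 seconds for a stale repository lock to go away
    $ borg create --lock-wait 120 /path/to/repo::docs-{now} ~/documents
    $ borg create --lock-wait 120 /path/to/repo::pics-{now} ~/pictures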

.. _disable_archive_chunks:

39 changes: 17 additions & 22 deletions docs/quickstart.rst
@@ -311,36 +311,30 @@ Backup compression
------------------

The default is lz4 (very fast, but low compression ratio), but other methods are
supported for different situations.
supported for different situations. Compression not only helps you save disk space,
but will especially speed up remote backups since less data needs to be transferred.

You can use zstd for a wide range from high speed (and relatively low
compression) using N=1 to high compression (and lower speed) using N=22.

zstd is a modern compression algorithm and might be preferable over zlib and
lzma, except if you need compatibility to older borg versions (< 1.1.4) that
did not yet offer zstd.::
zstd is a modern compression algorithm which can be parametrized to anything from
N=1 for highest speed (and relatively low compression) to N=22 for highest compression
(and lower speed)::

$ borg create --compression zstd,N /path/to/repo::arch ~

Other options are:

If you have a fast repo storage and you want minimum CPU usage, no compression::
If you have fast repo storage and want minimum CPU usage, you can disable
compression::

$ borg create --compression none /path/to/repo::arch ~

If you have a less fast repo storage and you want a bit more compression (N=0..9,
0 means no compression, 9 means high compression):

::
You can also use zlib and lzma instead of zstd, although zstd usually provides
the best compression for a given resource consumption. You may want to use these
algorithms if you need compatibility with older borg versions (< 1.1.4) that
did not yet offer zstd. Please see :ref:`borg_compression` for all options.

$ borg create --compression zlib,N /path/to/repo::arch ~

If you have a very slow repo storage and you want high compression (N=0..9, 0 means
low compression, 9 means high compression):

::
An interesting alternative is ``auto``, which first checks with lz4 whether a chunk is
compressible (that check is very fast), and only if it is, compresses it with the
specified algorithm::

$ borg create --compression lzma,N /path/to/repo::arch ~
$ borg create --compression auto,zstd,7 /path/to/repo::arch ~

You'll need to experiment a bit to find the best compression for your use case.
Keep an eye on CPU load and throughput.
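
One way to do that (a sketch; ``--stats`` prints size and duration information after
each run, and the archive names are only examples)::

    $ borg create --stats --compression zstd,3  /path/to/repo::test-zstd3  ~/data
    $ borg create --stats --compression zstd,10 /path/to/repo::test-zstd10 ~/data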
@@ -408,7 +402,8 @@ is installed on the remote host, in which case the following syntax is used::

$ borg init user@hostname:/path/to/repo

Note: please see the usage chapter for a full documentation of repo URLs.
Note: Please see the usage chapter for a full documentation of repo URLs. Also
see :ref:`ssh_configuration` for recommended settings to avoid disconnects and hangs.
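
For example, a repo URL that also specifies a non-default SSH port can be written in
ssh:// form (host, port and path are placeholders)::

    $ borg init ssh://user@hostname:2222/path/to/repo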

Remote operations over SSH can be automated with SSH keys. You can restrict the
use of the SSH keypair by prepending a forced command to the SSH public key in
2 changes: 2 additions & 0 deletions docs/usage/serve.rst
@@ -44,6 +44,8 @@ locations like ``/etc/environment`` or in the forced command itself (example bel

Details about sshd usage: `sshd(8) <https://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/sshd.8>`_

.. _ssh_configuration:

SSH Configuration
~~~~~~~~~~~~~~~~~
``borg serve``'s pipes (``stdin``/``stdout``/``stderr``) are connected to the ``sshd`` process on the server side. In the event that the SSH connection between ``borg serve`` and the client is disconnected or stuck abnormally (for example, due to a network outage), it can take a long time for ``sshd`` to notice the client is disconnected. In the meantime, ``sshd`` continues running, and as a result so does the ``borg serve`` process holding the lock on the repository. This can cause subsequent ``borg`` operations on the remote repository to fail with the error: ``Failed to create/acquire the lock``.
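
For reference, the kind of server-side keep-alive setting that mitigates this looks
roughly like the following (an illustrative sketch for /etc/ssh/sshd_config; the values
are examples)::

    # drop the connection to the client after ClientAliveCountMax * ClientAliveInterval
    # seconds without a response:
    ClientAliveInterval 20
    ClientAliveCountMax 3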
3 changes: 2 additions & 1 deletion src/borg/archiver.py
@@ -2672,7 +2672,8 @@ def do_break_lock(self, args, repository):
The heuristic tries with lz4 whether the data is compressible.
For incompressible data, it will not use compression (uses "none").
For compressible data, it uses the given C[,L] compression - with C[,L]
being any valid compression specifier.
being any valid compression specifier. This can be helpful for media files
which often cannot be compressed much more.
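
For example (an illustrative invocation, not part of the original help text)::

    borg create --compression auto,zstd,7 /path/to/repo::arch ~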

obfuscate,SPEC,C[,L]
Use compressed-size obfuscation to make fingerprinting attacks based on