NAME

borg-import-tar - Create a backup archive from a tarball

SYNOPSIS

borg [common options] import-tar [options] NAME TARFILE

DESCRIPTION

This command creates a backup archive from a tarball.
 
When giving '-' as path, Borg will read a tar stream from standard input.
 
By default (--tar-filter=auto) Borg will detect whether the file is compressed based on its file extension and pipe the file through an appropriate filter:
.tar.gz or .tgz: gzip -d
.tar.bz2 or .tbz: bzip2 -d
.tar.xz or .txz: xz -d
.tar.zstd or .tar.zst: zstd -d
.tar.lz4: lz4 -d

 
Alternatively, a --tar-filter program may be explicitly specified. It should read compressed data from stdin and output an uncompressed tar stream on stdout.
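The auto-detection described above amounts to a simple match on the file extension. A minimal sketch (pick_filter is a hypothetical helper name for illustration, not part of Borg):

```shell
#!/bin/sh
# Sketch of --tar-filter=auto: map a tarball's extension to the
# decompression filter from the table above. pick_filter is a
# hypothetical helper, not part of Borg itself.
pick_filter() {
    case "$1" in
        *.tar.gz|*.tgz)       echo "gzip -d" ;;
        *.tar.bz2|*.tbz)      echo "bzip2 -d" ;;
        *.tar.xz|*.txz)       echo "xz -d" ;;
        *.tar.zstd|*.tar.zst) echo "zstd -d" ;;
        *.tar.lz4)            echo "lz4 -d" ;;
        *)                    echo "" ;;       # plain tar: no filter
    esac
}

pick_filter backup.tar.gz    # -> gzip -d
pick_filter backup.tar.zst   # -> zstd -d
pick_filter backup.tar       # -> (empty: stream is read as-is)
```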
 
Most documentation of borg create applies. Note that this command does not support excluding files.
 
A --sparse option (as found in borg create) is not supported.
 
For information about tar formats and metadata preservation or loss, please see borg-export-tar(1).
 
import-tar reads these tar formats:
BORG: borg specific (PAX-based)
PAX: POSIX.1-2001
GNU: GNU tar
POSIX.1-1988 (ustar)
UNIX V7 tar
SunOS tar with extended attributes

OPTIONS

See borg-common(1) for common options of Borg commands.

arguments

NAME
specify the archive name
TARFILE
input tar file. "-" to read from stdin instead.

options

--tar-filter
filter program to pipe data through
-s, --stats
print statistics for the created archive
--list
output verbose list of items (files, dirs, ...)
--filter STATUSCHARS
only display items with the given status characters
--json
output stats as JSON (implies --stats)

Archive options

--comment COMMENT
add a comment text to the archive
--timestamp TIMESTAMP
manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-c SECONDS, --checkpoint-interval SECONDS
write checkpoint every SECONDS seconds (Default: 1800)
--checkpoint-volume BYTES
write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
--chunker-params PARAMS
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
-C COMPRESSION, --compression COMPRESSION
select compression algorithm, see the output of the "borg help compression" command for details.
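A value in the yyyy-mm-ddThh:mm:ss form expected by --timestamp can be produced with date(1) (a small sketch, not Borg-specific):

```shell
#!/bin/sh
# Produce a --timestamp value in local time, without a UTC offset.
TS=$(date +%Y-%m-%dT%H:%M:%S)
echo "$TS"
# e.g.: borg import-tar --timestamp="$TS" NAME TARFILE
```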

EXAMPLES

 
# export as uncompressed tar
$ borg export-tar Monday Monday.tar

# import an uncompressed tar
$ borg import-tar Monday Monday.tar

# exclude some file types, compress using gzip
$ borg export-tar Monday Monday.tar.gz --exclude '*.so'

# use higher compression level with gzip
$ borg export-tar --tar-filter="gzip -9" Monday Monday.tar.gz

# copy an archive from repoA to repoB
$ borg -r repoA export-tar --tar-format=BORG archive - | borg -r repoB import-tar archive -

# export a tar, but instead of storing it on disk, upload it to a remote site using curl
$ borg export-tar Monday - | curl --data-binary @- https://somewhere/to/POST

# remote extraction via "tarpipe"
$ borg export-tar Monday - | ssh somewhere "cd extracted; tar x"


Archives transfer script

Outputs a script that copies all archives from repo1 to repo2:
 
borg -r repo1 list --format='{archive} {time:%Y-%m-%dT%H:%M:%S}{NL}' | while read A T
do
  echo "borg -r repo1 export-tar --tar-format=BORG $A - | borg -r repo2 import-tar --timestamp=$T $A -"
done
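The name/timestamp split done by the loop can be checked against mock list output (the archive names and timestamps below are made up, not real borg output):

```shell
#!/bin/sh
# Simulate `borg list --format='{archive} {time:...}{NL}'` output and
# verify that `while read A T` splits each line into archive name (A)
# and timestamp (T) as the script above relies on.
printf '%s\n' \
    "Monday 2023-01-02T03:04:05" \
    "Tuesday 2023-01-03T03:04:05" |
while read A T
do
    echo "borg -r repo1 export-tar --tar-format=BORG $A - | borg -r repo2 import-tar --timestamp=$T $A -"
done
```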


 
Kept:
archive name, archive timestamp
archive contents (all items with metadata and data)

 
Lost:
some archive metadata (like the original commandline, execution time, etc.)

 
Please note:
all data goes over that pipe, again and again for every archive
the pipe is dumb: deduplication does not reduce the amount of data transferred or the transfer time
consider adding compression to the pipe
pipe over ssh for remote transfer
there is no special sparse file support

SEE ALSO

borg-common(1), borg-export-tar(1)

AUTHOR

The Borg Collective
