Commit graph

262 commits

morganamilo
83838214b7 Remove unused variables / assignments 2024-02-09 11:24:20 +10:00
morganamilo
bf76b5e89f libalpm: correctly log curl_download_internal return value
Signed-off-by: Allan McRae <allan@archlinux.org>
2024-02-04 10:23:34 +10:00
Andrew Gregory
3aa1975c1d alpm: add cache server support
Cache servers differ from regular servers in that they do not produce
warnings and are not removed from the server pool for "soft errors"
(i.e. the server was reachable, but the download failed), and they are
not used for databases. If a host is used as both a cache server and a
regular server, it may still be removed from the server pool for soft
errors that occur when used as a cache server, and removal from the
server pool for soft errors will not affect future attempted use as a
cache server.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
2023-12-02 04:56:25 +00:00
Andrew Gregory
56626816b6 dload: differentiate between hard and soft errors
Set the error count to -1 to indicate a hard error so that hard errors
can be treated differently.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
2023-12-02 04:56:25 +00:00
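
The -1 convention above is easiest to see in a tiny standalone sketch; the struct and helper names below are hypothetical, not libalpm's internals:

    /* Sketch: -1 marks a hard error so later checks can tell it apart
     * from an accumulated count of soft errors.  Names are hypothetical. */
    #include <stdio.h>

    struct server_errors {
        const char *host;
        int errors; /* >= 0: soft-error count, -1: hard error */
    };

    static void record_soft_error(struct server_errors *s) {
        if (s->errors >= 0) {
            s->errors++;
        }
    }

    static void record_hard_error(struct server_errors *s) {
        s->errors = -1;
    }

    static int server_unusable(const struct server_errors *s, int threshold) {
        return s->errors == -1 || s->errors >= threshold;
    }

    int main(void) {
        struct server_errors s = { "mirror.example.org", 0 };
        record_soft_error(&s);
        record_hard_error(&s);
        printf("%s unusable: %d\n", s.host, server_unusable(&s, 4));
        return 0;
    }
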
Allan McRae
471a030466 Avoid NULL dereference in curl_check_finished_download
We have not set handle in the function at this stage, so we cannot
assign an error to it. Pass the handle to the function to avoid
waiting until the payload is retrieved.

Signed-off-by: Allan McRae <allan@archlinux.org>
2022-12-13 10:00:13 +10:00
Chris Down
015eb31c3a dload: Remove unused ABORT_SIGINT
The last user of ABORT_SIGINT was removed in commit 84723cab5d
("Cleanup the old sequential download code"), and this isn't exported as
part of the public API.

Signed-off-by: Chris Down <chris@chrisdown.name>
Signed-off-by: Allan McRae <allan@archlinux.org>
2022-07-21 20:00:44 +10:00
Allan McRae
40583ebe89 Avoid information leakage with badly formed download header
Parsing of Content-Disposition relies on well formed headers.
A malformed header such as:

Content-Disposition="";

will result in a strndup(payload->content_disp_name, -1, ptr),
which will copy memory until it hits a \0.

Prevent this by only copying the value if it exists.

Fixes FS#73704.

Signed-off-by: Allan McRae <allan@archlinux.org>
2022-03-06 21:49:56 +10:00
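
A hedged illustration of the fix's idea as standalone C, not the actual dload.c parser: only duplicate the filename when a non-empty quoted value is really present. The helper name and header layout are assumptions:

    #define _POSIX_C_SOURCE 200809L /* for strndup */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch only: extract filename="..." from a Content-Disposition header,
     * copying the value just when it is non-empty.  Not the dload.c code. */
    static char *parse_content_disp_name(const char *header) {
        const char *key = "filename=\"";
        const char *start = strstr(header, key);
        if (start == NULL) {
            return NULL;
        }
        start += strlen(key);
        const char *end = strchr(start, '"');
        if (end == NULL || end == start) {
            /* malformed or empty value (as in the FS#73704 report): copy nothing */
            return NULL;
        }
        return strndup(start, (size_t)(end - start));
    }

    int main(void) {
        char *good = parse_content_disp_name(
            "Content-Disposition: attachment; filename=\"pkg-1.0-1-x86_64.pkg.tar.zst\"");
        char *bad = parse_content_disp_name("Content-Disposition: attachment; filename=\"\"");
        printf("good: %s, bad: %s\n", good ? good : "(null)", bad ? bad : "(null)");
        free(good);
        return 0;
    }
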
Allan McRae
90df85e9cf Update copyright years
./build-aux/update-copyright 2021 2022

Signed-off-by: Allan McRae <allan@archlinux.org>
2022-01-02 13:34:52 +10:00
Allan McRae
5352367022 Prevent translation of curl
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-11-20 12:39:42 -08:00
morganamilo
b0a2fd75b2 Update mailing list url
change pacman-dev@archlinux.org to pacman-dev@lists.archlinux.org

Most of this is copyright notices but this also fixes FS#72129 by
updating the address in docs/index.asciidoc.
2021-11-20 12:38:25 -08:00
Vladimir Panteleev
b242f5f24c libalpm: Log URLs when retrying
Allow finding which mirror was used to fetch a file.

This makes it a bit easier to debug situations in which mirrors serve
bad files with HTTP 200.

Signed-off-by: Vladimir Panteleev <archlinux@cy.md>
2021-11-20 12:36:29 -08:00
Charlie Sale
efb714b31c Order downloads by descending max_size
When downloading in parallel, sort by package size so that the larger
packages are queued first to fully leverage parallelism.
Addresses FS#70172

Signed-off-by: Charlie Sale <softwaresale01@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-09-04 10:34:00 +10:00
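
The ordering change is essentially a descending comparator over payload sizes. A self-contained sketch with a hypothetical payload struct (the real patch sorts ALPM's payload list, not a plain array):

    /* Sketch: queue larger downloads first by sorting on max_size, descending.
     * The struct and the use of qsort are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>

    struct payload {
        const char *name;
        long max_size;
    };

    static int cmp_size_desc(const void *a, const void *b) {
        const struct payload *pa = a, *pb = b;
        if (pa->max_size == pb->max_size) return 0;
        return pa->max_size < pb->max_size ? 1 : -1;
    }

    int main(void) {
        struct payload dl[] = {
            { "small.pkg", 120 }, { "huge.pkg", 90000 }, { "medium.pkg", 4500 },
        };
        qsort(dl, sizeof(dl) / sizeof(dl[0]), sizeof(dl[0]), cmp_size_desc);
        for (size_t i = 0; i < sizeof(dl) / sizeof(dl[0]); i++) {
            printf("%s (%ld)\n", dl[i].name, dl[i].max_size);
        }
        return 0;
    }
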
morganamilo
2ec6de96a6 only use effective url for urls containing .db or .pkg
GitHub and other sites redirect their downloads to a CDN, so the
download http://foo.org/myrepo.db may redirect to something like
https://cdn.foo.org/83749327439.

This then causes pacman to try to download the sig as
https://cdn.foo.org/83749327439.sig, which is incorrect. In this case
pacman should append .sig to the original url.

However, urls like https://archlinux.org/packages/community/x86_64/0ad/download/
redirect to the mirror, so .sig has to be appended after the redirects and
not before.

So we decide whether to append .sig to the original or the effective url
based on whether the effective url (minus the query part) contains .db or .pkg.

Fixes FS#71148

---

v2: move variable declaration to start of block
v3: use dbext instead of db
2021-09-04 10:34:00 +10:00
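
A sketch of the decision rule described above, with an illustrative helper: trim the query part of the effective url and use it for the .sig name only when it contains the db extension or ".pkg"; otherwise fall back to the original url:

    /* Sketch: pick which url ".sig" should be appended to.  Illustrative only;
     * the real code works on libalpm payloads and the configured db extension. */
    #include <stdio.h>
    #include <string.h>

    static const char *choose_sig_base(const char *original_url,
            const char *effective_url, const char *dbext) {
        /* compare only the part before any '?' query string */
        size_t len = strcspn(effective_url, "?");
        char trimmed[4096];
        if (len >= sizeof(trimmed)) {
            return original_url;
        }
        memcpy(trimmed, effective_url, len);
        trimmed[len] = '\0';

        if (strstr(trimmed, dbext) != NULL || strstr(trimmed, ".pkg") != NULL) {
            return effective_url; /* redirect target looks like a real db/pkg url */
        }
        return original_url; /* e.g. CDN blob names: append .sig to the original */
    }

    int main(void) {
        printf("%s\n", choose_sig_base("http://foo.org/myrepo.db",
            "https://cdn.foo.org/83749327439", ".db"));
        printf("%s\n", choose_sig_base("https://mirror.example.org/core.db",
            "https://mirror.example.org/core.db?token=abc", ".db"));
        return 0;
    }
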
morganamilo
c0026caab0 libalpm: Give -U downloads a random .part name if needed
archweb's download links all end in /download. This causes all the temp
files to be named download.part. With parallel downloads this results in
multiple downloads going to the same temp file and breaks the transaction.

Assign random temporary filenames to downloads from URLs that are either
missing a filename or whose filename does not contain at least three
hyphens (as a well formed package filename does).

While this approach to determining when to use a temporary filename is
not 100% foolproof, it does keep nice looking download progress bar names
when a proper package filename is given. The only downside of not using
temporary files when provided with a filename with three or more hyphens
is that URLs created specifically to bypass temporary filename usage cannot
be downloaded in parallel. We probably do not want to download packages
from such URLs anyway.

Fixes FS#71464

Modified-by: Allan McRae (do not use temporary files for realish URLs)
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-09-04 10:33:51 +10:00
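
The "at least three hyphens" heuristic can be sketched as a small check on the url's basename; the helper below is illustrative, not the libalpm code:

    /* Sketch: decide whether a URL's basename looks like a real package
     * filename (>= 3 hyphens, as in name-ver-rel-arch.pkg.tar.zst). */
    #include <stdio.h>
    #include <string.h>

    static int looks_like_package_filename(const char *url) {
        const char *base = strrchr(url, '/');
        base = base ? base + 1 : url;
        if (*base == '\0') {
            return 0; /* url ends in '/', no filename at all */
        }
        int hyphens = 0;
        for (const char *p = base; *p; p++) {
            if (*p == '-') {
                hyphens++;
            }
        }
        return hyphens >= 3;
    }

    int main(void) {
        printf("%d\n", looks_like_package_filename(
            "https://mirror.example.org/core/os/x86_64/bash-5.1.016-1-x86_64.pkg.tar.zst"));
        printf("%d\n", looks_like_package_filename(
            "https://archlinux.org/packages/core/x86_64/bash/download/"));
        return 0;
    }
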
morganamilo
0147de169a libalpm: always name signatures after original file
If the original download redirects to a different url, alpm would
try to name the sig file after that url instead of <original_file>.sig.
Instead, force this naming scheme regardless of the url.

Fixes FS#71274

Signed-off-by: Allan McRae <allan@archlinux.org>
2021-06-24 23:51:11 +10:00
morganamilo
238109760d libalpm: call retry events for sig downloads
Around the same time retry events were added, there was a patch to pass
sig download events to the frontend. The retry code was not updated to
account for this.

Signed-off-by: morganamilo <morganamilo@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-06-07 14:14:19 +10:00
Allan McRae
3401f9e142 libalpm: prevent download error pages ending up in package files
Some servers respond with error pages (e.g. 404.html) when a package is
not present. These were getting written to packages before moving on to
the next server. Reset the download progress on 400+ error conditions
to avoid this.

Signed-off-by: Allan McRae <allan@archlinux.org>
2021-06-03 09:30:10 +10:00
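
A standalone sketch of the general idea using libcurl's CURLINFO_RESPONSE_CODE: if the server answered 4xx/5xx, throw the body away so the next mirror starts from a clean .part file. The real fix lives in libalpm's download path; this helper is an assumption-laden approximation:

    /* Sketch (not the libalpm implementation): download to a .part file and
     * discard its contents if the HTTP status indicates an error page. */
    #include <stdio.h>
    #include <curl/curl.h>

    static int fetch_checked(const char *url, const char *partfile) {
        FILE *fp = fopen(partfile, "wb");
        CURL *curl = curl_easy_init();
        if (!fp || !curl) {
            if (fp) fclose(fp);
            if (curl) curl_easy_cleanup(curl);
            return -1;
        }
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp); /* default callback fwrites here */

        CURLcode rc = curl_easy_perform(curl);
        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        curl_easy_cleanup(curl);
        fclose(fp);

        if (rc != CURLE_OK || status >= 400) {
            /* the body was an error page (or the transfer failed): truncate the
             * .part file so the next mirror starts from a clean slate */
            FILE *reset = fopen(partfile, "wb");
            if (reset) fclose(reset);
            return -1;
        }
        return 0;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        int ok = fetch_checked("https://example.org/missing.pkg.tar.zst", "missing.part");
        printf("fetch %s\n", ok == 0 ? "succeeded" : "discarded");
        curl_global_cleanup();
        return 0;
    }
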
morganamilo
8bf17b29a2 libalpm: fix download rates becoming negative
When a download fails on one mirror, a new download is started on the
next mirror. This causes the amount downloaded to reset, confusing the
rate math and making it display a negative rate.

This is further complicated by the fact that a download may be resumed
from where it left off or started over.

To account for this we alert the frontend that the download was
restarted. Pacman then starts the progress bar over.

Signed-off-by: Allan McRae <allan@archlinux.org>
2021-05-09 23:28:04 +10:00
Andrew Gregory
72238aa046 call download progress callback for signatures
pacman may not care about them, but other front-ends do.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
2021-05-01 12:08:14 +10:00
Andrew Gregory
eb1a63a516 alpm_db_update: indicate if dbs were up to date
Restore the prior indicator of whether or not databases were up to date.
0 is used to indicate that *any* db was actually updated, as callers are
more likely to care about that than whether *all* dbs were updated.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
2021-05-01 12:08:14 +10:00
Andrew Gregory
0ff94ae85d fix downloading multiple urls with XferCommand
An extra break causes _alpm_download to break out of the payload loop as
soon as it sees a successful url download with XferCommand.

Fixes: FS#70608 - -U fails to download all files with XferCommand

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
2021-05-01 12:08:14 +10:00
Andrew Gregory
e7fa35baa2 add front-end provided context to callbacks
Our callbacks require front-ends to maintain state in order to provide
reasonable output.  The new download callback in particular requires
much more complex state information to be saved.  Without the ability to
provide context, state must be saved globally, which may not be possible
for all front-ends.  Scripting language bindings in particular have no
way to register per-handle callbacks without some form of context.

Implements: FS#12721

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
2021-05-01 12:08:14 +10:00
Andrew Gregory
9060058393 include retries and signatures in download count
Fixes: FS#69881

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-04-07 22:34:29 +10:00
Andrew Gregory
4bf7aa119d skip servers with too many errors
Keep track of errors from servers so that bad ones can be skipped once
a threshold is reached. Key the error tracking off the hostname because
hosts may serve multiple repos under different urls and errors are
likely to be host-wide.

Implements: FS#29293.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-04-07 22:33:52 +10:00
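
A minimal sketch of per-hostname error bookkeeping; the table, lookup helper and threshold below are illustrative rather than the actual libalpm bookkeeping:

    /* Sketch: track download errors per hostname and skip hosts that have
     * failed too often.  Structure names and threshold are illustrative. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_HOSTS 64
    #define ERROR_LIMIT 4

    struct host_errors {
        char host[256];
        int errors;
    };

    static struct host_errors table[MAX_HOSTS];
    static int nhosts;

    static struct host_errors *lookup(const char *host) {
        for (int i = 0; i < nhosts; i++) {
            if (strcmp(table[i].host, host) == 0) {
                return &table[i];
            }
        }
        if (nhosts == MAX_HOSTS) {
            return NULL;
        }
        snprintf(table[nhosts].host, sizeof(table[nhosts].host), "%s", host);
        table[nhosts].errors = 0;
        return &table[nhosts++];
    }

    int main(void) {
        for (int i = 0; i < 5; i++) {
            struct host_errors *h = lookup("bad.mirror.example.org");
            if (h && h->errors >= ERROR_LIMIT) {
                printf("skipping %s\n", h->host);
                continue;
            }
            /* pretend the download from this host failed */
            if (h) h->errors++;
        }
        return 0;
    }
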
Anatol Pomozov
1e60a5f006 Remove "total download" callback in favor of generic event callback
The total download callback is called right before packages start
downloading. We already have an event for that (ALPM_EVENT_PKG_RETRIEVE_START),
so it is natural to use the event to pass information about the expected
download size.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-03-25 11:39:03 +10:00
Allan McRae
17f9911ffc Update copyright year
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-03-01 12:22:20 +10:00
morganamilo
f5b373788f libalpm: don't use curl's deprecated functions
This bumps the minimum curl version from 7.32.0 to 7.55.0.

Signed-off-by: Allan McRae <allan@archlinux.org>
2021-01-09 00:12:19 +10:00
morganamilo
7cc8e0181f libalpm: remove useless if
Signed-off-by: Allan McRae <allan@archlinux.org>
2021-01-09 00:11:44 +10:00
morganamilo
4b8c274f7f libalpm: don't call dlcb when not set
Fixes FS#68728.

Signed-off-by: Allan McRae <allan@archlinux.org>
2020-11-26 16:31:27 +10:00
Anatol Pomozov
14c0e53eed Check that destfile_name exists before using it
In some cases (when trust_remote_name is used for a URL without a filename and
no Content-Disposition is provided by the server) destfile_name will be
NULL. In this case payload data will be stored in tempfile_name and no
destfile_name is set.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-07-14 23:43:10 +10:00
Anatol Pomozov
1fd95939db Do not free payload fields in the middle of this structure use
At the end of the payload's use, _alpm_dload_payload_reset() is called,
which will free() these and other fields anyway.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-07-14 23:41:45 +10:00
Anatol Pomozov
a8bdc2e10a Build signature remote name based on the main payload name
The main payload's final name might be affected by url redirects or the
Content-Disposition HTTP header value.

We want to make sure that the accompanying *.sig filename always matches the
package filename, so ignore finalname/Content-Disposition for the *.sig file.

This also helps to fix a corner case where the download URL does not contain
a filename and the server provides Content-Disposition for the main payload
but not for the signature payload.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-07-14 23:39:37 +10:00
Anatol Pomozov
f3dfba73d2 FS#33992: force download *.sig file if it does not exist in the cache
If the *.pkg file exists but the *.sig file does not, we still have to pass
the pkg to the multi_download API.

To avoid redownloading the *.pkg file we use the CURLOPT_TIMECONDITION curl
option.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-07-07 21:38:00 +10:00
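
The CURLOPT_TIMECONDITION mechanism mentioned above, shown as a standalone libcurl sketch (the url and cache path are placeholders): the body is skipped when the server's copy is not newer than the cached file's mtime.

    /* Sketch: only re-transfer the package if it is newer than the cached copy.
     * Paths are placeholders; the body goes to stdout in this simplified form. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <curl/curl.h>

    int main(void) {
        const char *url =
            "https://mirror.example.org/core/os/x86_64/pkg-1.0-1-x86_64.pkg.tar.zst";
        const char *cached = "/var/cache/pacman/pkg/pkg-1.0-1-x86_64.pkg.tar.zst";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, url);

        struct stat st;
        if (stat(cached, &st) == 0) {
            /* ask the server to send the body only if it changed since mtime */
            curl_easy_setopt(curl, CURLOPT_TIMECONDITION, (long)CURL_TIMECOND_IFMODSINCE);
            curl_easy_setopt(curl, CURLOPT_TIMEVALUE, (long)st.st_mtime);
        }

        CURLcode rc = curl_easy_perform(curl);
        long unmet = 0;
        /* CURLINFO_CONDITION_UNMET is 1 when the server skipped the transfer */
        curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);
        printf("result=%d, transfer skipped=%ld\n", rc, unmet);

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }
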
Anatol Pomozov
f078c2d3bc Move signature payload creation to download engine
Until now the caller of the ALPM download functionality has been in charge of
payload creation both for the main file (e.g. *.pkg) and for the accompanying
*.sig file. One advantage of such a solution is that all payloads are
independent and can be fetched in parallel, thus exploiting the maximum
level of download parallelism.

To build the *.sig file url we've been using simple string concatenation:
$requested_url + ".sig". Unfortunately there are cases when it does not
work. For example an archlinux.org "Download From Mirror" link looks like
this: https://www.archlinux.org/packages/core/x86_64/bash/download/ and
it gets redirected to some mirror. But if we append ".sig" to the end of
the link url and try to download it, then archlinux.org returns a 404 error.

To overcome this issue we need to follow redirects for the main payload
first, find the final url and only then append the '.sig' suffix.
This implies 2 things:
 - the signature payload initialization needs to be moved to dload.c
 as that is the place where we have access to the resolved url
 - *.sig is downloaded serially with the main payload, which reduces the
 level of parallelism

Move *.sig payload creation to dload.c. Once the main payload is fetched
successfully, we check if the caller asked to download the accompanying
signature. If yes - create a new payload and add it to mcurl.

The *.sig payload does not use the server list of the main payload and thus
does not support mirror failover; the *.sig file comes from the same server
as the main payload.

Refactor the event loop in curl_multi_download_internal() a bit. Instead of
relying on curl_multi_check_finished_download() to return the number of new
payloads, we simply rerun the loop iteration one more time to check if
there are any active downloads left.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-07-07 21:35:35 +10:00
Anatol Pomozov
84723cab5d Cleanup the old sequential download code
All users of _alpm_download() have been refactored to the new API.
It is time to remove the old _alpm_download() functionality now.

This change also removes obsolete SIGPIPE signal handler functionality
(this is a leftover from libfetch days).

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
2020-06-26 15:59:16 +10:00
Anatol Pomozov
16d98d6577 Convert '-U pkg1 pkg2' codepath to parallel download
Installing remote packages by URL is an interesting case for the ALPM
API. Unlike package sync ('pacman -S pkg1 pkg2'), '-U' does not deal with
a server mirror list. Thus _alpm_multi_download() should be able to
handle file downloads for payloads that have either the 'fileurl' field
or the pair of fields ('servers' and 'filepath') set.

The signature of alpm_fetch_pkgurl() has changed: it now accepts an
output list that is populated with the filepaths of the fetched packages.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
2020-06-26 15:59:08 +10:00
Anatol Pomozov
0346e0eef2 Convert download packages logic to multiplexed API
Create a list of dload_payloads and pass it to the new _alpm_multi_*
interface.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:39 +10:00
Anatol Pomozov
b96e0df4dc Implement multibar UI
Multiplexed download requires the ability to draw UI for multiple active
progress bars. To implement it we use ANSI codes to move the cursor up/down
and then redraw the required progress bar.
The `pacman_multibar_ui.active_downloads` field represents the list of active
downloads that correspond to progress bars.
`struct pacman_progress_bar` is the data structure for a progress bar.

In some cases (e.g. database downloads) we want to keep progress bars in order.
In some other cases (package downloads) we want to move completed items to the
top of the screen. The function `multibar_move_completed_up` allows configuring
such behavior.

Per discussion on the mailing list, we do not want to show download progress
for signature files.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:39 +10:00
Anatol Pomozov
c78eb48d91 Extend download callback interface with start/complete events
With the previous download interface the callback uses the first progress
event as the 'download has started' signal. Unfortunately this does not work
with up-to-date files that never receive 'download progress' events.
Up-to-date database messages are currently handled in sync_syncdbs()
after the sequential download is completed and a result from ALPM is
received. But this is not going to work with the multiplexed download
interface, which returns the result only after all files are completed.

Another problem with 'the first progress event is the beginning of the
download' is that the timing of such events is unpredictable. Thus the UI
progress bar order might differ from what was passed by the client to the
alpm_dbs_update() function. We actually want to keep the dbs progress bars
in a strict order.

To help solve these problems, extend the download callback to allow 2 more
events - download started and download completed. 'Download started'
events appear in the same order as in the list given by the client.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:21 +10:00
Anatol Pomozov
6a331af27f Implement multiplexed download using mCURL
curl_multi_download_internal() is the main loop that creates up to
'ParallelDownloads' easy curl handles, adds them to mcurl and then
performs curl execution. This is when the parallel downloads happen.
Once any of the downloads completes, the function checks its result.
If the download failed, it initiates a retry with the next server
from the payload->servers list. On download completion, all the payload
resources are cleaned up.

curl_multi_check_finished_download() is essentially a refactored version of
curl_download_internal() adapted for multi_curl. Once the mcurl porting is
complete, curl_download_internal() will be removed.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:21 +10:00
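
The loop structure described above follows the standard curl_multi pattern. A generic, self-contained sketch (not libalpm's curl_multi_download_internal(); the urls and parallel limit are placeholders):

    /* Sketch: generic curl_multi loop with a cap on concurrent transfers. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        const char *urls[] = {
            "https://example.org/a", "https://example.org/b", "https://example.org/c",
        };
        const int parallel = 2;
        int next = 0, total = 3;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURLM *mcurl = curl_multi_init();

        int active = 0;
        do {
            /* top up the multi handle until the parallel limit is reached */
            while (active < parallel && next < total) {
                CURL *easy = curl_easy_init();
                curl_easy_setopt(easy, CURLOPT_URL, urls[next++]);
                curl_multi_add_handle(mcurl, easy);
                active++;
            }

            curl_multi_perform(mcurl, &active);
            curl_multi_wait(mcurl, NULL, 0, 1000, NULL);

            /* reap finished transfers and check their results */
            CURLMsg *msg;
            int queued;
            while ((msg = curl_multi_info_read(mcurl, &queued))) {
                if (msg->msg == CURLMSG_DONE) {
                    printf("transfer finished: %d\n", msg->data.result);
                    curl_multi_remove_handle(mcurl, msg->easy_handle);
                    curl_easy_cleanup(msg->easy_handle);
                }
            }
        } while (active > 0 || next < total);

        curl_multi_cleanup(mcurl);
        curl_global_cleanup();
        return 0;
    }
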
Anatol Pomozov
1d42a8f954 Implement _alpm_multi_download
It is the equivalent of _alpm_download but accepts a list of payloads.

curl_multi_download_internal() is a stub at the moment and will be
implemented in later commits of this patch series.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:21 +10:00
Anatol Pomozov
fa68c33fa8 Inline dload_payload->curlerr field into a local variable
dload_payload->curlerr is a field that is used only inside the
curl_download_internal() function. It can be converted to a local
variable.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:21 +10:00
Anatol Pomozov
dc98d0ea09 Add multi_curl handle to ALPM global context
To be able to run multiple downloads in parallel efficiently, we need to
use the curl_multi interface [1]. It introduces a set of APIs around a new
type of handle, 'CURLM'.

Create the CURLM object at application start and store it in the global
ALPM context.

The 'single-download' CURL handle moves to the payload struct. A new CURL
handle is created for each payload with the intention of it being processed
by CURLM.

Note that curl_download_internal() is not ported to the CURLM interface as
that function will go away soon.

[1] https://curl.haxx.se/libcurl/c/libcurl-multi.html

Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:21 +10:00
Anatol Pomozov
a8a1a1bb3e Introduce alpm_dbs_update() function for parallel db updates
This is the equivalent of alpm_db_update but for multiplexed (parallel)
download. The difference is that this function accepts a list of
databases to update, and the ALPM internals then download them in parallel
if possible.

Add a stub for _alpm_multi_download, the function that will do parallel
payload downloads in the future.

Introduce the dload_payload->filepath field that contains the url path to
the file we download. It is like the fileurl field but does not contain the
protocol/server part. The rationale for having this field is that with
the curl multidownload the server retry logic is going to move to a curl
callback, and the callback needs to be able to reconstruct the 'next'
fileurl. It can do so by getting the next server url from the 'servers'
list and concatenating it with filepath. Once the 'parallel download'
refactoring is over, the 'fileurl' field will go away.

Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-05-09 11:58:21 +10:00
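
Reconstructing the 'next' fileurl from a server entry plus filepath is plain string concatenation; a tiny sketch with a hypothetical helper:

    /* Sketch: rebuild a payload url from a mirror base and a repo-relative
     * filepath, as the retry logic needs to do for each next server. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *build_fileurl(const char *server, const char *filepath) {
        size_t len = strlen(server) + strlen(filepath) + 2; /* '/' + NUL */
        char *url = malloc(len);
        if (url == NULL) {
            return NULL;
        }
        /* avoid a doubled slash if the server entry already ends with one */
        const char *sep =
            (*server != '\0' && server[strlen(server) - 1] == '/') ? "" : "/";
        snprintf(url, len, "%s%s%s", server, sep, filepath);
        return url;
    }

    int main(void) {
        char *url = build_fileurl("https://mirror.example.org/core/os/x86_64",
            "bash-5.1.016-1-x86_64.pkg.tar.zst");
        printf("%s\n", url ? url : "(alloc failed)");
        free(url);
        return 0;
    }
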
Allan McRae
6ba250e400 Use GOTO_ERR throughout
The GOTO_ERR define was added in commit 80ae8014 for use in future commits.
There are plenty of places in the code base it can be used, so convert them.

Signed-off-by: Allan McRae <allan@archlinux.org>
2020-04-13 23:44:46 +10:00
Allan McRae
e76ec94083 build-aux/update-copyright 2019 2020
Signed-off-by: Allan McRae <allan@archlinux.org>
2020-02-10 10:46:03 +10:00
morganamilo
d0c487d4dc Docs docs docs
libalpm: move docs from .c files into alpm.h, and fix/expand some
along the way.

Signed-off-by: Allan McRae <allan@archlinux.org>
2020-01-28 10:46:27 +10:00
Allan McRae
e54617c7d5 Fix "pacman -U <url>" operations
Commit e6a6d307 detected complete part files by comparing a payload's
max_size to initial_size.  However, these values are also equal when we
use pacman -U on a URL as max_size is set to 0 in that case.  Add a further
condition to avoid that.

Signed-off-by: Allan McRae <allan@archlinux.org>
2020-01-27 17:53:50 +10:00
Dave Reisner
9883015be2 Use c99 struct initialization to avoid memset calls
This is guaranteed to be less error prone than calling memset and hoping
the human gets the argument order correct.
2020-01-07 11:40:32 +10:00
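
For comparison, the two initialization styles side by side (the struct here is made up purely for illustration):

    /* Sketch: designated initializers vs. memset for zeroing a struct. */
    #include <string.h>

    struct dload_progress { /* illustrative struct, not from libalpm */
        long downloaded;
        long total;
        int retries;
    };

    void example(void) {
        /* old style: easy to swap the memset arguments or pass a wrong size */
        struct dload_progress a;
        memset(&a, 0, sizeof(a));

        /* C99 style: members not named are implicitly zero-initialized */
        struct dload_progress b = { .retries = 0 };
        (void)a; (void)b;
    }

    int main(void) {
        example();
        return 0;
    }
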
Allan McRae
e6a6d30793 Handle .part files that are the size of the correct package
In rare cases, likely due to a well timed Ctrl+C, but possibly due to a
broken mirror, a ".part" file may have a size at least that of the correct
package.

When encountering this issue, pacman currently fails in different ways
depending on where the package falls in the download list. If it is last,
a "wrong or NULL argument passed" error is reported; otherwise, an
"invalid or corrupt package" issue is raised.

Capture these .part files, and remove the extension. This lets pacman
either use the package if valid, or offer to remove it if it fails checksum
or signature verification.

Signed-off-by: Allan McRae <allan@archlinux.org>
2019-11-15 23:29:20 +10:00
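
A hedged sketch of the recovery step: if a leftover ".part" file is already at least the expected size, rename it to the final name so the normal checksum/signature validation decides its fate. The helper and the expected-size value are assumptions:

    /* Sketch: promote a complete-looking .part file to its final name so the
     * usual validation decides whether to keep it.  Illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    static int promote_part_file(const char *partpath, long expected_size) {
        struct stat st;
        if (stat(partpath, &st) != 0 || st.st_size < expected_size) {
            return 0; /* still incomplete: keep downloading into the .part file */
        }

        /* strip the trailing ".part" to get the final filename */
        const char *ext = ".part";
        char final[4096];
        size_t len = strlen(partpath);
        if (len <= strlen(ext) || strcmp(partpath + len - strlen(ext), ext) != 0
                || len >= sizeof(final)) {
            return 0;
        }
        memcpy(final, partpath, len - strlen(ext));
        final[len - strlen(ext)] = '\0';

        return rename(partpath, final) == 0;
    }

    int main(void) {
        /* 1835008 is a placeholder for the size the repo metadata reports */
        int promoted = promote_part_file("bash-5.1.016-1-x86_64.pkg.tar.zst.part", 1835008);
        printf("promoted: %d\n", promoted);
        return 0;
    }
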