- also avoid log archiving during individual download jobs via `SKIP_LOG_ARCHIVE=yes`
- I've tested with `PARALLEL_DOWNLOADS_WORKERS=16` and it saturates my gigabit link; ghcr.io is great at reads. More than ~16 workers might be too much, though.
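The notes above could be combined into an invocation sketch like the following. The two environment variables come from the notes; the actual download entrypoint is not shown here, so the `echo` at the end merely stands in for it and prints the effective settings.

```shell
#!/bin/sh
# Settings suggested by the notes above (assumption: these are plain
# environment variables read by the download tooling).
export PARALLEL_DOWNLOADS_WORKERS=16   # ~16 saturates a gigabit link against ghcr.io
export SKIP_LOG_ARCHIVE=yes            # skip log archiving during individual download jobs

# Placeholder for the real download entrypoint, which is not named in the notes.
echo "workers=${PARALLEL_DOWNLOADS_WORKERS} skip_archive=${SKIP_LOG_ARCHIVE}"
```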
Scripts in this directory:

- `artifact-reducer.py`
- `board-inventory.py`
- `download-debs.py`
- `info-gatherer-artifact.py`
- `info-gatherer-image.py`
- `json2csv.py`
- `mapper-oci-uptodate.py`
- `outdated-artifact-image-reducer.py`
- `output-debs-to-repo-json.py`
- `output-gha-matrix.py`
- `output-gha-workflow-template.py`
- `output-gha-workflow.py`
- `repo-reprepro.py`
- `targets-compositor.py`