Applying works fine with a 20220220135625 version id, but it won’t be
rolled back in the right order. Fortunately this action is idempotent,
so we can just rename it and reapply it under a new id.
To also avoid breaking large-scale rollbacks past 2022 for anyone who
already applied it under the old id, keep a stub migration with the
original id.
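Such a stub can be a no-op migration file kept under the old id; a
minimal sketch with a placeholder module name:

    defmodule Pleroma.Repo.Migrations.OldIdStubPlaceholder do
      use Ecto.Migration

      # No-op: the real work now lives under the new migration id. This
      # file only exists so rollbacks past the old id still find a
      # migration matching the entry recorded in schema_migrations.
      def up, do: :ok
      def down, do: :ok
    end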
This promotes and expands our existing optional migration, based on
usage statistics from several instances; see:
https://akkoma.dev/AkkomaGang/akkoma/issues/764
activities_hosts is now retained after all since it’s essential
for the "instance" query parameter of *oma’s public timeline to
reliably work in a reasonable amount of time. (Although akkoma-fe has
no support for this feature and apparently barely anyone uses it.)
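For context, a hypothetical sketch of the shape of query this index
serves; the real filter and index definition may differ:

    defmodule InstanceFilterSketch do
      import Ecto.Query

      # Limit a public-timeline query to activities whose actor lives
      # on the given instance host; an index over the host part is what
      # keeps this fast on large activity tables.
      def by_instance(query, instance) do
        from(a in query,
          where: fragment("split_part(?, '/', 3) = ?", a.actor, ^instance)
        )
      end
    end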
activities_actor_index was already dropped before in
20221211234352_remove_unused_indices; no need to drop it again.
Birthday indices were introduced in pleroma starting with
20220116183110_add_birthday_to_users, which comes after the last
common migration 20210416051708.
The current 10 GiB cache size is too large to fit into tmpfs for VMs and
other machines with smaller RAM sizes. Most non-Debian distros mount
/tmp on tmpfs.
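Assuming this refers to the nginx media-cache example from the
installation docs, the relevant directive looks roughly like the
following with a reduced max_size (path, zone name and concrete values
are illustrative):

    proxy_cache_path /tmp/akkoma-media-cache levels=1:2
                     keys_zone=akkoma_media_cache:10m max_size=1g
                     inactive=720m use_temp_path=off;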
The documentation was already clear that this only strips GPS tags.
But there are more potentially sensitive metadata tags (e.g. author
and possibly description), and the name alone suggests a broader effect.
Thus change the filter to strip all metadata except for colourspace info
and orientation (technically it strips everything and then re-adds
selected tags).
Explicitly stripping CommonIFD0 is needed since -all does not modify
IFD0 due to TIFF storing some actual image data there. CommonIFD0 then
strips a bunch of commonly used actual metadata tags from IFD0, to my
understanding leaving TIFF image data and custom metadata tags intact.
As of exiftool 12.57 both formats are supported. However, EXIF data is
optional for JXL, and if exiftool doesn’t find a preexisting metadata
chunk it will create one and treat this as a minor error, resulting in
a non-zero exit code.
Setting -ignoreMinorErrors avoids failing on such uploads.
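A rough sketch of the resulting exiftool invocation, assuming the
filter shells out via System.cmd (module name and exact arguments are
illustrative, not the actual implementation):

    defmodule StripMetadataSketch do
      def strip(file) do
        System.cmd(
          "exiftool",
          [
            # don't fail e.g. on JXL files without a preexisting EXIF chunk
            "-ignoreMinorErrors",
            "-overwrite_original",
            # delete all metadata (IFD0 itself is left alone, see above)
            "-all=",
            # additionally delete common metadata tags stored in IFD0
            "-CommonIFD0=",
            # then re-add selected tags from the original file
            "-tagsFromFile", "@",
            "-ColorSpaceTags",
            "-Orientation",
            file
          ],
          stderr_to_stdout: true
        )
      end
    end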
Due to JSON-LD compaction, the full address of the public scope
may also occur in shorter forms, and the spec requires us to treat them
all equivalently. To save us the pain of repeatedly checking for all
variants internally, normalise inbound data to just one form.
See note at: https://www.w3.org/TR/activitypub/#public-addressing
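A minimal sketch of such a normalisation, covering the three spellings
mentioned in the spec note (module name hypothetical, field handling
simplified):

    defmodule PublicAddressSketch do
      @public "https://www.w3.org/ns/activitystreams#Public"

      # Expand compacted spellings of the public collection back to the
      # full IRI in the addressing fields, so later code only ever has
      # to check for one form.
      def normalize(%{} = data) do
        Enum.reduce(["to", "cc"], data, fn field, acc ->
          case Map.get(acc, field) do
            nil -> acc
            value when is_list(value) ->
              Map.put(acc, field, Enum.map(value, &expand/1))
            value ->
              Map.put(acc, field, expand(value))
          end
        end)
      end

      defp expand(addr) when addr in ["Public", "as:Public", @public],
        do: @public

      defp expand(addr), do: addr
    end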
This normalisation needs to happen very early, even before the other
addressing fixes, or else an earlier validator will reject the object.
This in turn required moving the list-type normalisation earlier as
well; but since I was unsure about putting empty lists into the data
when no such field existed before, I excluded this case and thus had to
keep the later fixup as well.
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/670
Alongside moving to certbot's nginx plugin, also use conf.d instead of
recreating the sites-{available,enabled} setup that Debian/Ubuntu uses.
Furthermore, request a certificate for the media domain at the same
time, since that's now required.
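For illustration, with the nginx plugin both domains can be covered in
a single request (domain names are placeholders):

    certbot --nginx -d example.tld -d media.example.tld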