Webpage Display
Media: JPEG XL
With this feature enabled, Nightly supports the JPEG XL (JXL) format. This is an enhanced image file format that supports lossless transition from traditional JPEG files. See bug 1539075 for more details.
And when during 2024 I looked for a replacement after it died, I was so lucky that I got one with a UEFI that refused to load whatever distro I tried from the SSD, while having no issues loading the same thing from an external box over USB.
I have kept a screenshot of the firmware setup for years to remind me where the option can be found; looking at it now:
menu: Security > "Select UEFI file as trusted"
That would bring up a file-chooser where one can navigate the files in the EFI System Partition and select the distro's initial boot-loader file. For example, for a Debian install it would be either or both of:
/EFI/debian/shimx64.efi
/EFI/debian/grubx64.efi
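If it helps to confirm what is there before entering the firmware menu, from a running Linux system the EFI System Partition is usually mounted at /boot/efi (the mount point is an assumption and may differ), so the candidate boot-loader files can be listed with:
find /boot/efi/EFI -maxdepth 2 -name '*.efi'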
Do you remember the name of the product?
PC Engines APU2: AMD x86_64, 4-core, 4GiB, 3 x Gigabit Ethernet, 3 x mini PCIe, SIM slot, USB 3, Serial, SATA ports. Mine has dual-band WiFi in one mPCIe slot and an SSD in another.
Turris Mox, Marvell aarch64. This can be expanded plug-and-go via a range of extension modules. I've got one with 25 Gigabit Ethernet ports (3 x 8-port modules), 1 x SFP, 5 x USB3, WiFi, and Serial.
This means your drive needs to be completely dead for RAID to provide its protection, and that is usually the case.
The problem is when a drive starts corrupting the data it reads or writes. In that case RAID has no way to know and can even corrupt data on the healthy drive (data is read corrupted and then written back to both drives).
The issue is that there are two copies of the data and RAID has no way of telling which one is correct, so it basically flips a coin and selects one of them, even if the filesystem knows the content makes no sense.
That's basically the biggest advantage of filesystems like ZFS or Btrfs that manage RAID themselves: they have checksums, so they know which copy is valid, are able to recover, and can report that a drive appears healthy but is corrupting data, so you probably want to replace it.
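As a concrete illustration - a sketch assuming a Btrfs RAID 1 filesystem mounted at /mnt/pool (the mount point is a placeholder) - a scrub verifies every block against its checksum and repairs bad copies from the good mirror, and the per-device error counters then show which drive is producing the corruption:
btrfs scrub start -B /mnt/pool
btrfs device stats /mnt/pool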
I have a workstation with four storage devices: two 512GB SSDs, one 1TB SSD, and one 3TB HDD. I use LUKS/dm_crypt for Full Disk Encryption (FDE) of the OS and most data volumes, but two of the SSDs and the volumes they hold are unencrypted. These are for caching or public and ephemeral data that can easily be replaced: source-code of public projects, build products, experimental and temporary OS/VM images, and the like.
dmsetup ls | wc -l
reports 100 device-mapper Logical Volumes (LVs). However, only 30 are volumes exposing file-systems or OS images, according to:
ls -1 /dev/mapper/${VG}-* | grep -E "${VG}-[^_]+$" | wc -l
The other 70 are LVM raid1 mirrors, writecache, crypt, or other target-type volumes.

This arrangement allows me to choose caching, raid, and any other device-mapper target combinations on a per-LV basis. I divide the file-system hierarchy into multiple mounted LVs and each is tailored to its usage, so I can choose both device-mapper options and file-system type. For example, /var/lib/machines/ is an LV with BTRFS to work with systemd-nspawn/machined, so I have a base OS sub-volume and then various per-application snapshots based on it, whereas /home/ is a RAID 1 mirror over multiple devices and /etc/ is also a RAID 1 mirror.
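A minimal sketch of creating two such LVs (the VG name, sizes, and file-system choices below are placeholders, not the actual layout):
lvcreate --size 32G --name machines ${VG}
mkfs.btrfs /dev/${VG}/machines
lvcreate --type raid1 --mirrors 1 --size 32G --name home ${VG}
mkfs.ext4 /dev/${VG}/home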
The RAID 1 mirrors can be easily backed-up to remote hosts using iSCSI block devices. Simply add the iSCSI volume to the mirror as an additional member, allow it to sync to 100%, and then remove it from the mirror (one just needs to be aware of, and minimise, open files when doing so - syncing on start-up or shutdown when users are logged out is a useful strategy, or doing it from the startup or shutdown initrd).
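A sketch of that cycle, assuming the iSCSI disk has already been added to the VG as a PV and using placeholder names (${VG}/home for the mirrored LV, /dev/mapper/iscsi-backup for the iSCSI device): add the extra leg, watch sync_percent reach 100 with lvs, then split the leg off again:
lvconvert --mirrors +1 ${VG}/home /dev/mapper/iscsi-backup
lvs -a -o name,sync_percent ${VG}
lvconvert --mirrors -1 ${VG}/home /dev/mapper/iscsi-backup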
Doing it this way rather than as file backups means in the event of disaster I can recover immediately on another PC simply by creating an LV RAID 1 with the iSCSI volume, adding local member volumes, letting the local volumes sync, then removing the iSCSI volume.
I initially allocate a minimum of space to each volume. If a volume gets close to capacity - or runs out - I simply do a live resize using, e.g.:
lvextend --resizefs --size +32G ${VG}/${LV}
or, if I want to direct it to use a specific Physical Volume (PV) for the new space:
lvextend --resizefs --size +32G ${VG}/${LV} ${PV}
One has to be aware that --resizefs uses 'fsadm' and only supports a limited set of file-systems (ext*, ReiserFS and XFS), so if using BTRFS or others their own resize operations are required, e.g.:
btrfs filesystem resize max /srv/NAS/${VG}/${LV}

As a worked example, first create a new Physical Volume and add it to the volume group "NAS":
pvcreate /dev/sdz
vgextend NAS /dev/sdz
Now we want to add additional space to an existing LV "backup":
lvextend --size +128G --resizefs NAS/backup
*note: --resizefs only works for file-systems supported by 'fsadm' - its man-page says: "fsadm utility checks or resizes the filesystem on a device (can be also dm-crypt encrypted device). It tries to use the same API for ext2, ext3, ext4, ReiserFS and XFS filesystem."
If using BTRFS inside the LV, and the LV "backup" is mounted at /srv/backup, tell it to use the additional space using:
btrfs filesystem resize max /srv/backup
I spent December last year looking for a new bank to move to. One of my criteria (not the most important but it was on the list) was better-than-SMS 2FA.
No one offers it. There may be some niche, loosely-based finance org that does, but none of the banks or Building Societies do.
So, unfortunately, you need it in the UK.