Don't regret it! It let you catch a drive failure before a second drive failed, and recover your data while it was still recoverable. This is a good thing!
Archival data is a tricky problem. If you don't regularly read it, you may find that when you finally do need to read it, the sectors have suffered some bit rot. And what's one of the most likely moments you'll have to read archived data? When a hard drive fails and RAID is doing a rebuild...
So, by all means, run a ClamAV scan! Or better yet, run a ZFS scrub monthly or so. I've been a huge fan of ZFS for my archive data, mostly historic photos and Google Takeout data now, because it can detect silent corruption and can run scrubs to verify and repair any hard drive sectors that have problems.
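For anyone curious what that looks like in practice, here's a minimal sketch (pool name `tank` is a placeholder; adjust the schedule to taste). It's just a cron fragment plus the two zpool commands involved, not a full setup:

```shell
# Kick off a scrub of the pool named "tank" on the 1st of every month at 03:00.
# (crontab entry; runs as root or a user with zfs permissions)
0 3 1 * *  /sbin/zpool scrub tank

# Check progress and results later; any repaired or unrecoverable
# data shows up in the "scan:" line and per-device error counters.
zpool status tank
```

If `zpool status` ever reports checksum errors that ZFS repaired from a redundant copy, that's exactly the silent corruption a plain RAID setup would only discover during a rebuild.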
https://status.gitlab.com/pages/history/
They do have some latency and slowness issues, but I couldn't find anything like a whole-system outage.
As one of the comments here noted, this reminded me of the 2017 incident: https://about.gitlab.com/blog/2017/02/10/postmortem-of-datab... They should have improved a lot since then, but I'm still curious why such large or frequent downtimes keep happening to GitLab. Is it due to opening it up more for teams with private repos and extra perks, along with the quarantine and WFH surge?
Also, the GitLab sluggishness reminds me of their daemon that kills web server workers to control memory leaks[1], although that probably isn't the main cause of the platform's slowness.
[1]: https://about.gitlab.com/blog/2015/06/05/how-gitlab-uses-uni...