How do we know the AI output is accurate? What observable evidence is there? From the article:
> Inside this huge machine, which is called a synchrotron, electrons are accelerated to almost the speed of light to produce a powerful X-ray beam that can probe the scroll without damaging it. ...
> The scan is used to create a 3D reconstruction, then the layers inside the scroll - it contains about 10m of papyrus - have to be identified. ...
> After that artificial intelligence is used to detect the ink. It's easier said than done - both the papyrus and ink are made from carbon and they're almost indistinguishable from each other.
> So the AI hunts for the tiniest signals that ink might be there, then this ink is painted on digitally, bringing the letters to light.
The ML ink-detection models aren't spitting out Greek text. They're just predicting ink locations, which can be calibrated and cross-checked by manual inspection against the original. (E.g., an earlier article showed how ink particles showed up as a shift in texture.) They operate at a lower level than letters and words, so if the predicted ink corresponds to Greek letters that form recognizable Greek words and sensible passages, that's good evidence the output is correct.
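To make that concrete, here's a minimal sketch of what "predicting ink locations" (as opposed to predicting letters) looks like. The `ink_model`, patch sizes, and data below are made up for illustration; the real Vesuvius Challenge models are neural nets over 3D subvolumes, not this toy.

```python
import numpy as np

# Hypothetical stand-in for a trained patch classifier. The real models are
# CNNs/transformers over 3D subvolumes of the X-ray scan; this is a placeholder.
def ink_model(patch: np.ndarray) -> float:
    """Return P(ink) for one patch of X-ray intensities in [0, 1]."""
    return float(patch.mean() > 0.6)  # toy decision rule

def ink_probability_map(surface: np.ndarray, patch: int = 32, stride: int = 16) -> np.ndarray:
    """Slide over a virtually unwrapped papyrus layer and score each patch.

    The output is a coarse grid of ink probabilities. Letters only appear
    later, when the thresholded map is rendered and a human reads it.
    """
    h, w = surface.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    probs = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            probs[i, j] = ink_model(surface[y:y + patch, x:x + patch])
    return probs

# "Painting the ink on digitally" is then just rendering the map:
surface = np.random.rand(256, 256)   # fake scan data for illustration
ink_map = ink_probability_map(surface)
rendered = ink_map > 0.5             # binary mask a papyrologist would read
```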
Presumably, it's possible for errors to slip through, but human labelers can similarly make errors.
I think this is where most ML/AI practitioners need to focus.
Instead of obsessing over accuracy techniques for fully autonomous detection, they should focus on human-assisted detection to improve the quality and value of the solution. In this case recall/sensitivity is the most important accuracy metric, and it should be pushed as close to 100% as possible. Even if the ML/AI gets it wrong with a false positive, the human experts can look at it and hopefully eliminate it. This also considerably reduces the burden of the upstream manual inspection, since the filtering is machine-automated. More importantly, it mitigates and hopefully prevents false negatives, because a false negative never gets the chance to be inspected or verified by the human experts or trained inspectors at all. Essentially the AI/ML functions as a filter, sifting through the massive data very quickly (with minimal or zero false negatives), and the results are then verified by the much slower human experts.
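A rough sketch of that recall-first thresholding, using scikit-learn; the validation data and the 99.5% recall target are invented for illustration, not taken from any real pipeline.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def recall_first_threshold(y_true, y_score, min_recall=0.995):
    """Pick the highest threshold that still keeps recall >= min_recall.

    The point from the comment above: accept extra false positives (human
    experts will weed them out), but avoid false negatives, because a region
    the model never flags is never seen by a human at all.
    """
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision/recall have one more entry than thresholds; drop the last point.
    ok = recall[:-1] >= min_recall
    if not ok.any():
        return thresholds.min()      # fall back: flag almost everything
    return thresholds[ok].max()      # strictest threshold meeting the recall target

# Toy validation data standing in for "does this patch contain ink?"
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=2000), 0, 1)

t = recall_first_threshold(y_true, y_score)
flagged = y_score >= t               # only these regions go to the human experts
print(f"threshold={t:.3f}, flagged {flagged.mean():.1%} of patches for review")
```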
> Presumably, it's possible for errors to slip through, but human labelers can similarly make errors.
That's often the argument for AI systems: is it better than humans (at avoiding car accidents, reading text, reading X-rays, etc.)? But in science we need observable evidence.
From the blog post where they announced the 2023 grand prize: they did a couple of things to verify the results, including scanning the same area multiple times and making sure that multiple models produce similar results.
https://scrollprize.org/grandprize#how-accurate-are-these-pi...
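Those cross-checks amount to an agreement test between independent predictions (repeat scans, separately trained models). A toy sketch of the idea, with made-up maps and thresholds rather than anything from the actual verification pipeline:

```python
import numpy as np

def agreement_map(prediction_maps, threshold=0.5):
    """Cross-check several independent ink-probability maps.

    `prediction_maps` is a list of same-shaped probability maps, one per
    model (or per repeat scan of the same area). Returns the fraction of
    maps calling each pixel "ink": high-agreement letters are the
    trustworthy ones, low-agreement blobs get flagged for human review.
    """
    votes = np.stack([p >= threshold for p in prediction_maps])
    return votes.mean(axis=0)

# Illustration with three fake model outputs over the same region.
rng = np.random.default_rng(1)
maps = [np.clip(rng.normal(0.5, 0.2, size=(64, 64)), 0, 1) for _ in range(3)]
agree = agreement_map(maps)
consensus_ink = agree >= 2 / 3            # at least 2 of 3 models agree
disputed = (agree > 0) & (agree < 2 / 3)  # candidates needing manual checking
```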
They said that the work was a Greek Epicurean work, but described it as finding fulfillment in the pleasures of life. The Greek Epicureans held that avoiding pain and suffering was the object of ethical philosophy, which is not the same thing at all.
> Epicureans had a very specific understanding of what the greatest pleasure was, and the focus of their ethics was on the avoidance of pain rather than seeking out pleasure.
So whether the description of the work (as GP critiques) is correct really comes down to whether 'pleasure' is being used in the Epicurean sense. Someone unfamiliar with it would certainly misunderstand.
News from Scroll 5 - https://news.ycombinator.com/item?id=42955356 - Feb 2025 (3 comments)
First word discovered in unopened Herculaneum scroll by CS student - https://news.ycombinator.com/item?id=37857417 - Oct 2023 (210 comments)
Vesuvius Challenge 2023 Grand Prize awarded: we can read the first scroll - https://news.ycombinator.com/item?id=39261861 - Feb 2024 (216 comments)
https://en.wikipedia.org/wiki/Epicureanism
What the OP actually says is 'fulfilment can be found through the pleasure of everyday things', which is very much in line with Epicurean thinking.
https://en.wikipedia.org/wiki/Epicureanism#Pleasure