Hi! Perma is made by the Harvard Library Innovation Lab, which I direct, and I wrote a bunch of the early code for it back in 2015 or so.
For HN readers, I'd suggest checking out https://tools.perma.cc/, where we post a bunch of the open source work that backs this. Due to the shift from WARC to WACZ (a zipped web-archive format developed by Webrecorder), it's now possible to pass around fully interactive, high-fidelity web archives as simple files and host them with client-side JavaScript, which opens up a bunch of new possibilities for web archive designs. You can see some tech demos of that at our page https://warcembed-demo.lil.tools/, where each page is just a static file on the server and some client-side JavaScript.
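To make that concrete, here is a minimal sketch of the static-hosting model, assuming Webrecorder's replayweb.page embed component; the file names and paths are made up, and a real deployment also needs the component's sw.js service worker hosted alongside the page:

    # Sketch only: generate a static HTML page that replays a .wacz entirely
    # client-side via the <replay-web-page> element from replayweb.page.
    # Paths and file names here are hypothetical.
    EMBED_PAGE = """<!doctype html>
    <script src="https://cdn.jsdelivr.net/npm/replaywebpage/ui.js"></script>
    <replay-web-page source="/archives/example.wacz"
                     url="https://example.com/"></replay-web-page>
    """

    with open("archive.html", "w", encoding="utf-8") as f:
        f.write(EMBED_PAGE)

The server's only job is to hand out the .wacz and this page as static files; all replay happens in the visitor's browser.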
It's best to think of Perma.cc itself, the service, as UX and user support wrapped around that work to help solve linkrot, primarily for law journals, courts, and journalists (for example, dashboards for a law journal to collaborate on the links they're archiving for their authors), and of our work on this as building from that use case to try to make it easier for everyone to build similar things.
I saw some mentions of the Internet Archive, which is great, and is also kind enough to keep a copy of our archives and expose them through the Wayback Machine. One thing I've been thinking about recently in archiving is that there's a risk to overstandardizing -- you don't want too much captured with the same software platforms, funded through the same models, governed by the same people, exposed through the same interfaces, etc. There's supposed to be thousands of libraries, not one library. Unlike "don't roll your own crypto," I'd honestly love to see more people roll their own archives.
Happy to answer any questions!
My first question was "If this is a free service, how do I know it will still be around in even a few years?". This was answered by your comment that it is (or at least appears to be?) funded by Harvard.
In which case, why isn't this prominently displayed on the main page? Or why not use a Harvard library URL, which would significantly boost the trust level? Especially vs. a ccTLD, a class of domains known to be problematic?
It runs on core Harvard funds, and we also have paid accounts used by law firms and journalists.
As an innovation lab we often minimize Harvard branding on project websites, because it's more instructive to win or lose on our own merits than based on how people feel about Harvard, in either direction.
- Why is it better than the Internet Archive?
I personally see the benefit as the Internet Archive potentially no longer being the only game in town, but even that comes with certain costs (which may not be great for the community as a whole -- depending on who you ask).
I would love to hear your perspective on where you stand relative to other providers of similar services.
I think the biggest distinction is between archiving platforms made primarily for authors and those made primarily for web crawlers.
If you're an author (say, of a court decision) and you archive example.com/foo, Perma makes a fresh copy of example.com/foo as its own WACZ file, with a CPU-intensive headless browser, gives it a unique short URL, and puts it in a folder tree for you. So you get a higher-quality capture than most crawls can afford, including a screenshot and PDF; you get a URL that's easy to cite in print; you can find your copy later; you get "temporal integrity" (it's not possible for replays to pull in assets from other crawls, which can result in Frankenstein playbacks); and you can independently respond to things like DMCA takedowns. It's all tuned to offer a great experience for that author.
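As a rough illustration of what one high-fidelity capture involves -- this is a sketch with a stock headless browser, not Perma's actual pipeline, and it skips the WACZ packaging step:

    # Sketch: capture one page with a headless browser, producing the
    # screenshot and PDF artifacts described above. Not Perma's real code.
    # Requires: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    def capture(url: str, out_prefix: str) -> None:
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")  # let lazy-loaded assets settle
            page.screenshot(path=f"{out_prefix}.png", full_page=True)
            page.pdf(path=f"{out_prefix}.pdf")  # PDF export is Chromium-only
            browser.close()

    capture("https://example.com/foo", "capture-001")

Running a full browser per capture is what makes this CPU-intensive, and it's why a dedicated per-link capture can be higher fidelity than a crawl that has to budget across millions of URLs.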
IA is primarily tuned for preserving everything regardless of whether the author cared to preserve it or not, through massive web crawls. Which is often the better strategy -- most authors don't care as much as judges do about the long-term integrity of their citations.
This is what I'm getting at about the specific benefits of having multiple archives. It's not just redundancy, it's that you can do better for different users that way.
With the Internet Archive, the purpose seems to be public archiving. One could imagine a use case where you want non-public archives that are therefore not subject to any takedown requests, especially if they are considered court evidence, for example.
Paying directly for your links to be archived helps fund the service and therefore keeps it going. You would want to see some guarantees in the contract about pricing if you were to rely on the service long-term.
Irrelevant. The point is that there shouldn't be a single archive for anything, because then it has the longevity of the operators. Who can say whether Harvard or the IA will close its service first? Why choose?
Is there any concept of signing data at time of archive, and verification at time of access, to prove it is not later tampered with, say by a bribed sysadmin?
Similarly, are there any general supply-chain integrity measures in place, such as code review of dependencies, reproducible builds, or creating archives reproducibly in independently administered enclaves?
You note archives could be used for instances like Supreme Court decisions, so anyone with the power to tamper with content would certainly be targeted.
We're coauthors on the wacz-auth spec, which is designed to solve this sort of thing by signing archives with the domain cert of the archive that created them. If you cross-sign with a private cert you can do pretty well with this approach against various threat models, though it has to be part of a whole PKI security design.
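For intuition, here's the shape of that sign-then-verify flow in miniature. This is a simplified sketch, not the actual wacz-auth spec: a real deployment signs with the archive's domain certificate rather than a freshly generated key, and signs a digest manifest inside the WACZ rather than the whole file.

    # Sketch: sign an archive's digest at capture time, verify at access time.
    # Requires: pip install cryptography. File name is hypothetical.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    with open("capture-001.wacz", "rb") as f:
        digest = hashlib.sha256(f.read()).digest()

    key = Ed25519PrivateKey.generate()   # stand-in for the archive's cert key
    signature = key.sign(digest)         # stored alongside the archive

    # Later, at access time: verification fails if either the archive bytes
    # or the stored signature were altered after signing.
    try:
        key.public_key().verify(signature, digest)
        print("archive verified")
    except InvalidSignature:
        print("tampering detected")

Note this only proves the archive hasn't changed since signing; protecting against a compromised signer is where the cross-signing and broader PKI design come in.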
I think the best approach for high stakes archiving is to have a standard for "witness APIs" so that you could fetch archives from independent archiving institutions. That also solves for the web looking different from different places. That hasn't gelled yet, though.
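Something like the following, where the witness endpoints are entirely hypothetical since no such standard exists yet, and real captures of dynamic pages would need fuzzier comparison than a byte-for-byte digest:

    # Sketch of the "witness API" idea: fetch the same capture from several
    # independently run archives and compare digests. Endpoints are made up.
    import hashlib
    import urllib.parse
    import urllib.request

    WITNESSES = [
        "https://archive-a.example/capture?url=",
        "https://archive-b.example/capture?url=",
        "https://archive-c.example/capture?url=",
    ]

    def witness_digests(url: str) -> set[str]:
        digests = set()
        for base in WITNESSES:
            with urllib.request.urlopen(base + urllib.parse.quote(url, safe="")) as resp:
                digests.add(hashlib.sha256(resp.read()).hexdigest())
        return digests

    # Agreement across independent institutions is evidence that no single
    # operator (or bribed sysadmin) altered the record.
    if len(witness_digests("https://example.com/foo")) == 1:
        print("all witnesses agree")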
What happens if you get a lawsuit or injunction demanding information removal or alteration? What if somebody archives a born secret or something sensitive?
Heads up that the .cc TLD is frequently used for malicious purposes and will likely get blocked by a lot of networks.
When I've worked on spam prevention in the past, that TLD always comes up disproportionately often. I've never personally built a filter that blocks the entire TLD, but I'm sure from looking at the data that people with stricter compliance requirements have.
The Anti-Phishing Working Group ranked the TLD the second-worst in the ratio of phishing domains to total registrations, with the highest total volume of phishing (page 13):
https://docs.apwg.org//reports/APWG_Global_Phishing_Report_2...
https://en.wikipedia.org/wiki/Perma.cc
It's unique in that, if you opt out of the paid account route, you need someone like a library to sponsor your access, and then when you archive something, it is akin to giving it to your library to store.
Right, but new product or not, if you use this as a solution for permalinks you are running the risk that in certain types of networks—especially those that the target audience for academic writing often operates in—people will not be able to access your links.
That might be worth the trade-off, and it might well be that the service is well-known enough that even networks that block the entire TLD make an exception for Perma.cc. But I wouldn't assume that to be the case without validating it first.
I also think it's worth just calling out bad TLDs when we see them so that people don't think it's okay to copy. Even if Perma.cc is well known enough to avoid the problem, your new app won't be.
Permanent, until they go out of business. We should just standardize on archive.org and figure out a way to distribute redundant copies of its data around in such a way that it can survive even if the original Internet Archive goes down.
I hate to push blockchain stuff, but something like IPFS might actually be a good idea here.
It’s run by the Harvard Law Library (i.e. backed by a multibillion-dollar university that is substantially older than the country it’s located in) and operated as a decentralized network across multiple public and private library systems.
Like any service, it might shut down due to lack of interest, but I doubt Harvard Law is at risk of “going out of business”.
Regardless of institutional gravitas, projects without wide uptake are mostly doomed on a 20-year horizon.
Harvard has a much larger population than the Cocos Islands; I don't know why this project decided to rely on a country-code TLD.
Wait until they publicly criticise Musk.
Archive.org has had its fair share of problems recently as well. I'm still mad Google dropped their own cache and just expected the IA to pick up the slack.
Having said that, there are a variety of different archives that Wikipedia uses for backups - Perma.cc is included on that list. https://en.m.wikipedia.org/wiki/Wikipedia:List_of_web_archiv...
There are also projects like ArchiveBox that allow for self-hosted backups of websites: https://archivebox.io/
Someone's still got to dedicate a chunk of their disk to retaining a copy, though.
These folks send copies of all their archives to IA, so it's no less permanent than IA. And being focused on a specific niche (with deep-pocketed backers) means it's less likely to blow up for unrelated reasons.
IPFS is basically content-addressed HTTP, and it's really slow, and there's no way to discover all the stuff that needs to be redundantly archived (which makes sense because anyone can host anything).
Prospective users are understandably concerned that perma.cc will go out of business. No institution can guarantee that it will exist in perpetuity. But perma.cc has at least published a contingency plan: https://perma.cc/contingency-plan.
"Please note that this is a statement of Perma.cc’s present intent in the event the project winds down. Perma.cc may revise or amend this page at any time. Nothing on this page is intended to, nor may it be read to, create a legal or contractual right for users or obligation for Perma.cc, under the Perma.cc Terms of Use or otherwise."
So, yeah, nothing is different from anyone else, other than that they have a "cunning plan" that can easily get shitcanned at anyone's whim.
They can only promise to do their best.
https://en.wikipedia.org/wiki/Persistent_uniform_resource_lo...
PURL is in the same space as w3id.org, not perma.cc. PURL and w3id work by creating stable URLs that can redirect to a (potentially changing) origin; perma.cc/archive.org/ArchiveBox create WARC archives of the content at a given instant.
It used to be hosted at purl.org and run by OCLC, but in 2016 it was transferred to the Internet Archive.
https://web.archive.org/web/20161002094639/https://www.oclc....
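To illustrate the difference, the PURL/w3id model is essentially a tiny redirect table in front of a movable target; the mapping and port in this sketch are made up, and an archiver would instead store a snapshot of the response body itself:

    # Sketch of the stable-URL-redirect model: the short path is permanent,
    # while the target it points to can be updated later.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REDIRECTS = {"/my-paper": "https://current-home.example/paper-v2"}

    class PurlHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            target = REDIRECTS.get(self.path)
            if target:
                self.send_response(302)               # redirect to current home
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404)

    HTTPServer(("", 8080), PurlHandler).serve_forever()

If the origin disappears, a PURL dangles; an archive keeps serving its copy. That's the whole distinction.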
Is the best counter here to acquire a brand TLD to operate themselves (setting aside all the linkrot that would generate)? Comparing against other brand TLDs, they've certainly got the resources for this to have been an option.
I used the same unique online alias from age ten to eighteen.
I used to be able to google it and see hundreds of results. Dozens of forums I posted on. Dozens of games I played. In my twenties, I'd do this for the nostalgia of reading posts I'd written in my preteen era.
Now, there are just seven results.
If the world has taught me anything, it's that nothing is permanent, and nothing is perfect. Forums from days of yore are littered with tinybucket 404 pictures, and anonymous Imgur images are gone. We like to imagine that the internet will stay the way it is forever, but I don't believe it. Free internet services like email, file uploads, etc. won't last forever. The idea is amazing, and exactly what we need for a constantly changing internet, but the only thing that is forever is nothingness.