Hi all, I made this. I just received a notice from Zeit that they blocked it:
> I am writing to let you know we have blocked your deployment: whereisscihub.now.sh
> This is because the deployment contained illegal content.
> Please let me know if this is not the case or if you have any questions.
I hadn't noticed it was on Hacker News, but that explains the sudden attention.
While I don't believe the site was illegal (it's just a link, after all, and proxied from Wikidata - so that would be illegal too?), I understand that Zeit are not too happy about it.
The source code for the website is at [0], if you want to run it yourself. That said, a recent Wikidata policy change means the data isn't great any more [1]. For now, I'd recommend just visiting the Wikipedia page on Sci-Hub [2] to get a recent URL, or using an alternative [3].
Edit: I should also add that this was just an afternoon project I did once, and I should plug the main thing I'm working on in this area. I'm trying to remove the incentive for academics to publish in "top" (but closed access) journals: https://plaudit.pub/
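For anyone curious what the site actually did: it essentially just asked Wikidata for Sci-Hub's "official website" statement. A rough Python sketch of that lookup follows (not the actual source, which is linked above; the entity ID Q21980377 and property P856 are my from-memory assumptions, so verify them on wikidata.org before relying on this):

```python
# Sketch of the lookup the site performed: ask Wikidata for Sci-Hub's
# "official website" statement via the public SPARQL endpoint.
# Assumptions (from memory, please verify on wikidata.org): Q21980377
# is Sci-Hub's item and P856 is the "official website" property.
from urllib.parse import urlencode

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def build_query(entity: str = "Q21980377") -> str:
    """SPARQL asking for the entity's official website (P856)."""
    return f"SELECT ?url WHERE {{ wd:{entity} wdt:P856 ?url. }}"

def build_request_url(entity: str = "Q21980377") -> str:
    """Full GET URL; fetch it with any HTTP client for JSON results."""
    params = urlencode({"query": build_query(entity), "format": "json"})
    return f"{SPARQL_ENDPOINT}?{params}"
```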
Basically, if you post a link and know it links to illegal content, you are in violation of copyright law. If you do not know, you're not in violation, but upon being informed, you do know and have to take down the link.
In the case of Sci-Hub: if somebody makes a website whose very purpose is to link to the ever-changing Sci-Hub domain (ever-changing precisely because of copyright takedowns), I'd think a court would find it reasonable to conclude that you know you're linking to something that violates somebody else's copyright, and thus that you're in violation of copyright law yourself, at least in EU jurisdictions.
We actually had a chat about that on Twitter :) That said, the site didn't really link to infringing material - just to the homepage of Sci-Hub. Technically, you could also append something to the URL to make you end up at a page with infringing content, but there was no explicit link to there from the site, and it can be used for non-infringing content as well.
Linking to "illegal" content is also illegal in some EU countries. Guides or software to get around their censorship are also "illegal", and linking to "illegal" content is often attacked under this rule.
You'd think they would realise how absurd the whole thing is when they have to resort to censorship of censorship.
What you did is not illegal at all, but the law doesn't really matter anymore, so I guess you have fallen into the "pissing off powerful interests" category, which is just as bad as doing illegal stuff. You should consider putting this up as an onion service at the least; it is quite helpful for some of us. Kudos :)
Hey Vinnl, would it be alright if I DM you? I think there's a way to create your service using Handshake.org such that it can't be shut down.
For context, Handshake is a new project aiming to create a distributed certificate authority and naming system. Domain names on Handshake are very difficult to shut down, so you could have a sci hub domain on Handshake that continuously tracks the sci hub servers. Users would be able to always access sci hub from the same domain in this case.
I don't think HN has DM functionality, but feel free to contact me on any platform.
In any case, feel free to take the source code and to deploy it elsewhere.
That said, it might be best to have Sci-Hub adopt Handshake directly? I think you might be able to get in touch with Alexandra Elbakyan, the person behind Sci-Hub, through its Twitter: https://twitter.com/Sci_Hub
I am of the opinion that knowledge should be free and easily accessible. Research papers on Sci-Hub only help to make knowledge easily accessible. Humanity's progress depends on people who share their knowledge, not on people who use it as a tool to rule and hold on to power.
It would be nice if the whole of Sci-Hub could move to the IPFS network and be accessible through a domain. The problem is that IPFS does not offer anonymity for the nodes hosting the content, so a publisher can sue any node in the network that serves a chunk of the data. Even though node operators are not liable as intermediaries, nowadays that's pretty hard to defend in court, and something like the default judgement in the Sci-Hub case could happen to node owners.
Hopefully libp2p, IPFS, and IPNS can provide some way to be both performant and anonymous.
Indeed, some countries like India, China, and Russia will even punish intermediaries for hosting such content (there are no safe-harbor laws in these countries). In the USA and other economies, hiring a lawyer and getting access to legal remedies is so expensive that even with safe-harbor laws, one cannot mount a defence without deep pockets like Google, Facebook, Microsoft, etc.
> It would be nice if the whole of Sci-Hub could move to the IPFS network and be accessible through a domain.
I think you have a misconception about how Sci-Hub works. Most of the content you can access through Sci-Hub is not hosted by Sci-Hub. When you request a fresh new paper, your request is eventually served by a computer on the network of one of the universities with a subscription to the journal that published that paper, using donated or stolen credentials of high-profile scientists if needed. Sci-Hub already uses IPFS to cache some papers; it cannot move to the IPFS network wholesale because it is not a static library of content like LibGen (with which it is often confused) but an actual hub that decides how to serve your request in real time.
Sci-Hub only accesses a university network if it's the first time anyone has requested that article through Sci-Hub. It does cache the result on its own servers, though, so subsequent requests will be served from those.
(It used to cache them on LibGen, and I think it does still store them there, but I seem to recall it now uses its own servers as well.)
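The flow described above (serve from cache; on a miss, fetch upstream via an institutional proxy and store the result) can be sketched roughly like this. The function names and the proxy step are purely illustrative, not Sci-Hub's actual code:

```python
# Illustrative sketch of the flow described above: serve from cache,
# and only go upstream (via an institutional proxy) on a cache miss.
# Not Sci-Hub's actual code; fetch_via_proxy is a stand-in.

def make_resolver(fetch_via_proxy):
    """Return a resolver that caches papers by DOI after the first fetch."""
    cache = {}

    def resolve(doi):
        if doi not in cache:            # first-ever request: go upstream
            cache[doi] = fetch_via_proxy(doi)
        return cache[doi]               # repeat requests: served from cache

    return resolve
```

With a stub fetcher you can see that only the very first request for a given DOI hits the upstream.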
Thanks for clarifying; I didn't know that it uses stored credentials to serve requests. Still, it would be nice if there were a decentralised network storing these papers in case those journals go away or out of business.
These research papers are funded indirectly anyway, and their authors don't really depend on income from the papers themselves, but on the resulting data, experiments, and experience gained while working on them. Reading these papers is only a very small step towards knowing something, but a necessary one.
IPFS is actually a better fit for this than it might appear.
It doesn't proactively replicate content, so it isn't like BitTorrent, where everyone is uploading.
Instead, it provides resilience: the actual location of Sci-Hub could move around, but requesters could still get access to the content. The rest of the network helps with routing, but nodes won't serve data they haven't accessed themselves.
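The property doing the work here is content addressing: the identifier is derived from the data itself, so the same address stays valid no matter which host currently holds a copy. A toy sketch of the idea (a big simplification, not IPFS's real CID scheme):

```python
# Toy illustration of content addressing: the address is derived from
# the bytes themselves, so it stays valid no matter which node holds a
# copy. A big simplification of IPFS's real CID scheme.
import hashlib

def address_of(content):
    return hashlib.sha256(content).hexdigest()

def resolve(address, nodes):
    """Ask each known node; any node holding the bytes can answer."""
    for node in nodes:
        if address in node:
            return node[address]
    return None
```

Even if the original host disappears, any mirror that fetched the bytes can still answer for the same address.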
I haven't been following gnunet.org for quite a while, but I remember from years ago that it offers plausible deniability for nodes, aside from what sounded like a solid review of their crypto.
> I am of the opinion that knowledge should be free and easily accessible.
I understand the hatred for academic publishers; however, this argument makes no sense.
It takes work to produce human knowledge. Writing a book, creating educational materials, and writing up research all take time, just like it takes time to build a house.
> It takes work to produce human knowledge. Writing a book, creating educational materials, and writing up research all take time, just like it takes time to build a house.
My understanding is that most / all articles in academic journals are written by researchers who are not paid by the journal. Then, the articles are peer reviewed by another set of researchers - who are also not paid by the journal. Then, those same researchers are charged by the journal if they want to have access to the article. It's very unclear what value the journal actually brings - outside of slapping a particular well known name on the publication they don't do much. All of the people that actually do the hard work receive no direct compensation.
I'm a researcher. I write the articles in their books. They don't pay me for that. All of the articles are written by people like me. All of the articles are reviewed and edited by people like me. I even do the typesetting. I get paid by a tax-funded university. Publishers (such as Springer) use their money not to produce knowledge, but to sue people, market their conferences and journals, and pay their administrators and shareholders.
Information freedom radicalism is not in any way mutually exclusive with compensating creators.
Grants, shareware, donationware, donation subscriptions (like Patreon), crowdfunding... the list of ways information creation gets compensated before the information is given away for free goes on and on.
Copying information takes virtually no work. It's not unreasonable to hold that the concept of "intellectual property" is morally bankrupt.
Nobody here suggested creators shouldn't be compensated.
The research is already funded differently, usually through (public) grants, salaries of the researchers and everybody on their teams (taxes, tuition), or even patent revenues.
Nobody makes money off of publications except for the publishers. Some publishers even demand money from the authors for their publishing "services".
The publishers themselves add very little value, as e.g. peer reviews are done for "free" (i.e. on the universities' or peer reviewers' dime), or rather for the prestige that comes from being a reviewer.
What's more, a lot of the authors of what's published actually want to publish their work for everybody else to see. But they are currently caught between a rock and a hard place: they need the "reputation" that comes from publishing in "prestigious" journals, but those journals, or rather their parent publishing companies, require signing a contract saying that the article may not be published elsewhere, not even on the author's personal website.
In short, the actual research work has been paid for already, academics (authors and peer reviewers) are fucked, the public (who often paid for the research already) is fucked, and the publishers are money-printing-machines.
You could easily replace the publishers with a few non-profit orgs that are funded by universities for a couple of hundred bucks per university per month, and make all the content available for "free" to everybody on the planet in perpetuity.
There are over 25,000 universities worldwide, so if every university paid e.g. 200 bucks a month, it would come out to $5M per month, which is enough to fund the publishing end of every last scientific journal in existence and then some (assuming we keep not paying authors and peer reviewers through such publishing, as it is now).
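Spelled out, the back-of-the-envelope estimate above is just:

```python
# Back-of-the-envelope check of the funding estimate above.
universities = 25_000
fee_per_month = 200                        # dollars per university per month
total_per_month = universities * fee_per_month
total_per_year = total_per_month * 12
print(total_per_month)                     # prints 5000000
print(total_per_year)                      # prints 60000000
```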
If I were the regulator in charge, the first thing I would do is require all work that is, even in part, paid for by public taxpayer funds or public university funds to be published under a free license, since the public already paid for the work. This is similar to how works created by the US government (and some other governments) are automatically in the public domain; that's why e.g. all those nice NASA pictures are already "freely" available to everybody: US federal taxpayers already paid about $35 on average per year to fund ALL of NASA.
And how exactly do publishers contribute to said work? Research is done by researchers; papers are written by researchers; and papers are reviewed by other researchers for free...
Of course there are costs to publishing. It doesn't follow that we thus must lock publications behind paywalls or clutter them with ads.
It's perfectly consistent to both want the work funded and want it free and easily accessible. When someone says knowledge should be free, it's unreasonable and illogical to assume they are saying that knowledge work shouldn't be funded.
The discussion of how to fund work without paywalls or ads is worth having. The straw-man argument about whether we should fund work is not.
They used to have a .onion address, which I assumed would continue to be the most reliable way in, but it's been down for a long time. I'm surprised, it seems like Tor would be the best way to remain up and accessible.
Wikidata, actually, which Wikipedia unfortunately does not yet use as a data source. But yeah, also a Wikimedia project, and also maintained by volunteers.
Sci-Hub is such a marvelous resource. I like to read research papers for fun sometimes, but I'm, let's put it gently, no scientist. So it'd be a huge waste of money for me to pay the steep prices of scientific journal subscriptions just to read 1-2 articles per month, of which I understand at most 75%.
However, I still feel bad about accessing them without any benefit to the scientific teams. Is there any way to give back to the people whose work I'm reading or at least some kind of science-related general fund or charity that I could contribute to?
Scientists don’t get paid anything regardless of whether or not you pay a subscription/article fee to the science journals. The best thing to do to benefit scientists (usually) is to spread awareness of their work, as that is the main currency in academia that determines career advancement and future fundraising. Ideally you’re promoting it to other academics (so that they generate trackable citations), but general publicity also helps.
Paying taxes is (at least in Western countries, where I know a tiny bit about academia) sufficient to support the authors.
If you want to go beyond that: reach out / blog / tweet / etc. about papers that pique your interest.
If someone else wrote about a paper of mine, I'd be over the moon!
Those teams receive nothing from publishing either, other than CV credit and occasionally, a copy of the journal. Many of them put their pre-prints on academia.edu and often will email it to you if you ask.
It's complicated, but places like Wiley and Elsevier take much more than they give.
It’s not an answer to your question, but when you pay JSTOR or whomever, none of that goes back to the scientists. It’s not like royalties for an author.
Yes, don't feel bad, we'd never see it anyway. I agree with the proposals of the others, rather write an email to the corresponding author or a blog post about it.
Do not feel bad. You are the benefit. More precisely, your education is. We should try to maximize people's potential and share knowledge with everyone, who wants to learn, instead of hiding it behind paywalls.
Off-topic: I think it would be awesome to have a browser extension that collected HTTP codes as achievements, and you get a pretty little badge every time you run into a new code.
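A sketch of the bookkeeping such an extension would need; everything here is made up for illustration, and a real version would live in browser-extension code rather than Python:

```python
# Toy sketch of the idea above: award a badge the first time a new
# HTTP status code is seen. Names are made up for illustration.

class StatusAchievements:
    def __init__(self):
        self.seen = set()

    def record(self, status):
        """Return a badge message the first time a code appears, else None."""
        if status in self.seen:
            return None
        self.seen.add(status)
        return f"Achievement unlocked: HTTP {status}!"
```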
Executing `curl -v https://whereisscihub.now.sh` shows an `x-now-id` header, which means that the request reached Zeit Now's servers and that it's Zeit Inc. who blocked it.
Same result here in Brazil. An HTTP status code implies the domain name was properly resolved to the IP of the HTTP server. The page must have been removed by the host.
> An HTTP status code implies the domain name was properly resolved to the IP of the HTTP server.
This is not true. Your ISP can in theory route the requested address to whatever they want. In the Netherlands, TPB is blocked and my ISP returns a 200 status code serving their blocking landing page. Other ISPs give a 30x redirect to their own domain.
Basically the ISP can do a man in the middle attack.
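A rough heuristic tying these two comments together: if the response still carries the hosting provider's identifying header (like Zeit Now's `x-now-id`), the request reached the host, so the block is host-side rather than an ISP intercept. This is only a hint; header names differ per host, and a middlebox could in principle forge them:

```python
# Heuristic from the two comments above: if the response still carries
# the hosting provider's identifying header (x-now-id for Zeit Now),
# the request reached the host, so the block is host-side rather than
# an ISP/DNS intercept. Header names differ per host, and a middlebox
# could in principle forge them, so treat this as a hint only.

def blocked_by(headers, host_header="x-now-id"):
    present = {name.lower() for name in headers}
    if host_header.lower() in present:
        return "host"       # request reached the hosting provider
    return "unknown"        # possibly an ISP or DNS-level intercept
```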
The sooner academic publishers die the better—not a very original thought, I'm aware.
There's still very much a distinction between (self-)publishing on the arXiv and publishing in a 'real' journal/conference within the academic community, and I think it comes down to two factors:
1. The arXiv moderation process has a much lower bar; you see some pretty rubbish papers (often from large tech companies) make their way onto the arXiv which would never be published in a 'real' journal; ultimately this isn't a shortcoming (the arXiv has had a monumental impact on academia), but rather that the arXiv isn't trying to be a peer-reviewed journal.
2. Visibility/'impact' is lower on the arXiv; there are ~14k submissions per month, so inevitably the signal to noise ratio is low.
I feel what we really need is for a few universities to put the many millions they spend on annual subscriptions into some kind of endowment to pay for proper editorial boards for a peer-reviewed arXiv instead, open access, perhaps with a token fee for submission (or a slightly higher academic affiliation bar). I think if two or three big universities from each of the US/UK/Europe suddenly made this change we would see the death of academic publishing in months.
Just an arXiv with peer review is not enough. There is a lot of room for innovation and improvement. There was quite an interesting discussion [1] about creating something in between Overleaf [2], arXiv [3], Git, and Wikipedia, moreover with the ability to do peer-to-peer review, discussion, and social networking. Check out the last article [4] in that series. There are a few implementations, albeit not covering all features, like Authorea [5] and MIT's PubPub [6] (which is open source [7]). See also GitXiv [8] and the Publishing Reform project [9]. Moreover, there is quite an interesting initiative from DARPA to create a scientific social network of a kind: Polyplexus [10].
There are also experiments with peer-reviewed open-access journals that work as arXiv overlays. In mathematics we have Discrete Analysis [1] and SIGMA [2] (at least), which are quite good. Some journals have started moving to a fully free open-access model, e.g. JEP [3], Documenta Mathematica [4], Acta Mathematica [5], and Annales IF [6] (again, at least).
I agree with @xvilka that there is lots of room for innovation and exploration, but even in the more traditional setting there is some movement coming from smaller players. It seems to me that the big publishers' monopoly, which at least in my field is completely unjustified, is the bigger obstacle.
This covers a lot of what we're building at OpenReview.net. Obviously we still have a long way to go, but we already have some of the major AI/ML conferences using our platform to accept submissions, perform paper matching, and host peer-review forums.
At least in astrophysics, the standard is to let your paper go through peer review first, then post the paper once accepted. In the arXiv comments, you then put "Accepted by <journal>". Besides offering the papers for free, the arXiv is the main "news feed" that astronomers use (much more convenient than having to check all the different journals). Imo this system works quite well. There's indeed papers posted that are not yet peer-reviewed, but these you read with a more skeptical view.
The large number of submissions is indeed a problem for other fields (I think mostly computer science?). I'm not sure what the best solution is for that. A voting system (where you see a mix of new and popular submissions), together with some content filtering, can help the reader. However, this would probably lead to some important papers getting buried.
I'm not convinced they are fully useless. E.g. even an anti-vax person will understand that academic publishing has some solidity, even if they think it's 'the establishment'. Explaining to a layman what arXiv does, and how solid or not the science on the site is, is a lot more difficult.
On LibGen, you can search for stuff and find books/docs that match the query, but unfortunately that appears not to be possible on Sci-Hub, which is inconvenient.
That said, both are incredible resources for academics, researchers, students or even folks just wanting to read up on a subject in more depth than Wikipedia has.
It's not really acting as a search engine, no. You can use Scholar for that, find the DOI, and paste it into sci-hub.
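That paste step is just URL construction once you have the DOI; the mirror domain below is a deliberate placeholder, since the real domain is exactly the thing that keeps changing:

```python
# Build a lookup URL from a DOI. The mirror domain is a placeholder
# parameter precisely because the real domain changes all the time.

def scihub_url(doi, mirror="sci-hub.example"):
    doi = doi.strip().removeprefix("https://doi.org/")
    return f"https://{mirror}/{doi}"
```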
Speaking of which, Scholar should really be showing DOIs on their search results. That is, the DOI should appear as a top-level clickable link from every result entry. I've tried to suggest many things like this to Scholar over the years, but to no avail... Like Google in general. Just a feature request into the void.
They would never display it. It's basically the same reason why they won't show a complete url anymore, the more time you spend on Google the more money they get.
[0] https://gitlab.com/Flockademic/whereisscihub/
[1] https://gitlab.com/Flockademic/whereisscihub/issues/9
[2] https://en.wikipedia.org/wiki/Sci-Hub
[3] https://sci-hub.now.sh/ (though also hosted by Zeit)
I ended up removing the links because I didn't want to get into more legal trouble with Elsevier. More details here: https://news.ycombinator.com/item?id=20606362
http://sciencefair-app.com/
Giving back doesn't have to be monetary. A note like that could give someone the encouragement they need to finish an experiment or a draft.
I don’t know if that makes you feel any better?
Zeit did: https://news.ycombinator.com/item?id=22412405
[1] http://blog.jessriedel.com/2015/04/16/beyond-papers-gitwikxi...
[2] https://www.overleaf.com/
[3] https://arxiv.org/
[4] http://blog.jessriedel.com/2015/05/20/gitwikxiv-follow-up-a-...
[5] https://authorea.com/
[6] https://www.pubpub.org/
[7] https://github.com/pubpub
[8] https://medium.com/@samim/gitxiv-collaborative-open-computer...
[9] https://gitlab.com/publishing-reform/discussion
[10] https://polyplexus.com/
[1] https://discreteanalysisjournal.com/ [2] https://www.emis.de/journals/SIGMA/ [3] https://jep.math.cnrs.fr/index.php/JEP/ [4] https://www.elibm.org/series?q=se:2204 [5] https://intlpress.com/index.php [6] https://aif.centre-mersenne.org/
Essentially it's 'Reddit + Patreon for research'.
http://asone.ai/
http://web.archive.org/web/20200225000142/https://whereissci...
So not exactly useful if this one is blocked, heh.
Or is it some joke going over my head?