user3939382 · 3 years ago
Let’s keep iterating on the takedown evasion strategies until they’re impenetrable. It’s the only hope the People have of actually being in control of anything important.
SergeAx · 3 years ago
The TOR version of Z-Library was up and running all that time, including auth. So what we should evaluate is easier-to-access takedown evasion strategies.
account42 · 3 years ago
TOR is already pretty easy to access via TOR browser so anything else that requires additional client software is probably not going to have better adoption.
RupertEisenhart · 3 years ago
If you use the Brave browser, TOR is always a keyboard shortcut away.
unsupp0rted · 3 years ago
> It’s the only hope the People have of actually being in control of anything important.

Well I wouldn't go that far. There are more $5 wrenches than there are people.

https://xkcd.com/538/

user3939382 · 3 years ago
Yep, aka rubber-hose cryptanalysis.

The real test here was Assange who embarrassed the U.S. military by publishing drone footage of them killing civilians not to mention everything else.

They got him on an individual level (IMHO by blatantly discarding any remaining vestigial pretense of abiding by the law) but-- the site is up.

kobalsky · 3 years ago
That comic did a huge disservice to computer security and is still being thrown around years later.

There's a huge difference between knowing that your data has been compromised (you hand over the keys to avoid torture) and not knowing; that alone justifies every hoop you have to jump through to keep your data encrypted.

Besides, it doesn't apply here: onion services were created specifically to host content anonymously, so they don't know whom to torture.

xboxnolifes · 3 years ago
You can make systems that even the creator cannot take down.
DennisP · 3 years ago
That only works if you can track down who the people are.

pradn · 3 years ago
Current shadow libraries (zlib, libgen, scihub) suffer from centralized data hosting and opaque librarians/custodians (who modify metadata and gate inclusion/exclusion of content). We already have the tools to solve this.

1. Files are stored in a distributed fashion and referred to via their content hash. We already have IPFS for this.

2. Library metadata can be packaged up into a SQLite DB file. The DB would contain IPFS hashes, book names, authors, etc.

3. Teams of volunteers assemble and publish the library metadata DB files. There can be multiple teams, each with their own policies. The latest library files can be published via RSS. Each team can have their own upload portal.

4. A desktop app can pull multiple RSS feeds for multiple libraries. The libraries can be combined together and be searched easily on the client side. Users can search for content via the latest library metadata files, locally on their desktop. Content can be downloaded via IPFS.

5. The desktop app can also double as an IPFS host, allowing users to choose specific files to pin or simply allocate an amount of space for the purpose (100 GB, etc). There could also be servers that aggregate pinning info to make sure no gaps are there.

6. For ease of access, people can run websites that preclude the need to set up your own desktop app or download libraries.

7. Library teams can publish metadata DBs and content via torrents, too, for long-term/disaster-recovery/archival purposes.

This would be a true hydra: no single centralized team, no reliance on DNS. If one team's library setup goes down, you can use another's.
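
The steps above can be sketched concretely. A minimal illustration (the schema, column names, and CID are all hypothetical): the library metadata DB is just a SQLite file mapping IPFS content hashes to bibliographic metadata, which anyone can mirror, and client-side search (step 4) is an ordinary local query.

```python
# Sketch of steps 1-4: a library metadata DB as a SQLite file.
# Schema and example values are hypothetical, not from any real shadow library.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real team would publish a .sqlite file
conn.execute("""
    CREATE TABLE books (
        cid     TEXT PRIMARY KEY,  -- IPFS content hash of the file
        title   TEXT NOT NULL,
        author  TEXT,
        isbn    TEXT,
        year    INTEGER
    )""")
conn.execute(
    "INSERT INTO books VALUES (?, ?, ?, ?, ?)",
    ("bafyExampleCID", "Example Title", "A. Author", "978-0-00-000000-0", 1999),
)
# Client-side search is then a purely local query; content is fetched
# separately over IPFS using the returned CID.
rows = conn.execute(
    "SELECT cid, title FROM books WHERE title LIKE ?", ("%Example%",)
).fetchall()
print(rows)
```

Because the whole catalog is one file, "publishing the library" reduces to distributing that file (via RSS, torrents, or IPFS itself).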

dark-star · 3 years ago
1. Yes, IPFS could solve that, but it relies on people hosting content. Previous examples of content-based addressing showed that little-accessed content tends to disappear as nodes go offline over the years. This would need to be solved, and I think the only way to solve it is to have a battery of centralized IPFS servers mirroring each other, which defeats the "fully distributed" setup.

2. This would also need to be hosted and could be taken down. You'd need to mirror this too, but that's a simpler problem to solve (gigabytes instead of terabytes).

3. The upload portals and the RSS feeds would, again, be centralized, or would have to change so regularly that they become impractical.

In the end you would end up with a dozen (a hundred? more?) different z-libraries, which would actually make things worse from a preservation standpoint: only the most popular content would be shared, and libraries focused on rare/exotic/fringe material would be in danger of being lost since they have fewer volunteers/mirrors/seeds/...

Also, Freenet and other projects already showed that end users allocating some storage and using it to spread data around is not an easy problem: the fluctuation in end nodes is so big that it slows the entire network to a crawl. I'm not sure this problem has been solved yet.

pradn · 3 years ago
1. You're absolutely right that IPFS alone isn't a good way to guarantee durability. I think you'd need a second level of archiving done in bulk. Thousands of books collated into various torrents that people can help seed. This has already been done for LibGen. IPFS does provide a common hash space for everyone to rally around, and does make it easy to download single books. It's also easy to speed up downloads by using an IPFS gateway - the actual protocol is slow. I don't expect most users to actually pin files or anything.

2. Well, users and librarians need some way to find each other. That's true in any system. And that communication medium (a website on the public internet, word of mouth, Telegram groups) can allow certain kinds of attacks. If all someone needs is an IPFS hash of a recent library metadata DB (a SQLite file), any means of communication will suffice. I think this approach allows for centralization (sure, keep a website up as long as the authorities don't care) but also gracefully allows for all manner of decentralization (use any of the above methods to distribute the metadata DB).

3. Any many-to-one system with curation (librarians) will have weak points. The idea is you can set up upload portals across any communication medium (a regular website, a dark-net site, a Telegram group, email) - and the libraries take care of collating the information. The social grouping is what matters more (libraries vs uploaders vs downloaders) - and we want to make it tech agnostic and, therefore, more resilient.

This system will be stable I think, for two reasons:

1. Network and branding effects will naturally create a few big libraries. People will use the familiar, useful libraries. See how few torrent search sites took up the bulk of traffic, back in the heyday of torrents. Most users will probably use a website, and the ones that are easiest to use will probably get the most traffic.

2. The resilience of the system is needed only once in a while. A set of libraries will emerge, there'll be enforcement actions and they might break apart, and then new ones will pick up their pieces (easily, because the metadata and content are open). So we want to provide the openness for this to actually happen.

yamtaddle · 3 years ago
The core problem with IPFS and friends is that the vast majority of Web access these days occurs on battery-powered devices that need to sleep their connections often and for long periods or battery life will plummet. End users aren't going to accept even 10% worse battery life (and it'd likely be worse than that) just so they can participate in "the swarm" and have higher latency and slower transfer on all their requests.
__MatrixMan__ · 3 years ago
You could keep the metadata DB in IPFS too, along with the books. The only thing that needs to be regularly republished is the CID of the metadata DB, which fits in a tweet.

In fact, it fits in a tweet with enough room for steganographic deniability hijinks. You could publish a browser plugin which transforms tweets into IPFS CIDs according to a non-cryptographic hash algorithm. That way the cleartext of your tweet is not takedown-worthy, nor is the plugin, but the two together let users transform the seemingly innocuous tweet into the metadata DB update they need.
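
A simplified stand-in for that scheme (the original describes a non-cryptographic hash transform; this sketch swaps in a plain codebook, and every name in it is hypothetical): publish a shared word table, encode the CID's bytes as ordinary English words so the visible tweet reads as harmless text, and let the plugin with the same table decode it back.

```python
# Sketch of the "innocuous tweet" idea: encode CID bytes as codebook words.
# WORDLIST is a stand-in; a real codebook would use 256 common English words.
WORDLIST = [f"word{i:03d}" for i in range(256)]
INDEX = {w: i for i, w in enumerate(WORDLIST)}

def encode(cid_bytes: bytes) -> str:
    """One word per byte; the tweet is just a sequence of codebook words."""
    return " ".join(WORDLIST[b] for b in cid_bytes)

def decode(tweet: str) -> bytes:
    """The plugin side: recover the CID bytes from the tweet's words."""
    return bytes(INDEX[w] for w in tweet.split())

cid = b"exampleCIDbytes"  # placeholder, not a real CID
tweet = encode(cid)
assert decode(tweet) == cid
```

The cleartext tweet contains no CID, and the decoder is just a lookup table, which is the deniability property the comment is after.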

pradn · 3 years ago
That's a great idea, and also amusing. :)

It's amazing that we can refer to data in a globally-unique way with small content-based hashes. Hash collisions aren't usually worth worrying about.

Another benefit is that it's easy to store large numbers of hashes with basic metadata.

SHA-256 hashes are 32 bytes. If it takes 512 bytes on average to store author/title/publish-date/ISBN, then the hash is a small part of the total per item (though not well-compressible). You can store the info for nearly 2 million books in a gigabyte.

Shadow librarians can also publish curated collections of books. I know a guy who tried to do this in a systematic way for college-level history textbooks covering a wide swathe of the world's history. The entire catalog with metadata and hashes is probably only a few hundred KB.
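
The size estimates above check out with simple arithmetic (the 512-byte average is the comment's own assumption):

```python
# Back-of-the-envelope check of catalog sizes.
hash_bytes = 32        # SHA-256 digest
metadata_bytes = 512   # assumed average for author/title/date/ISBN
per_book = hash_bytes + metadata_bytes          # 544 bytes per record

books_per_mb = 1_000_000 // per_book            # books that fit in a megabyte
books_per_gb = 1_000_000_000 // per_book        # books that fit in a gigabyte
print(per_book, books_per_mb, books_per_gb)
```

So a megabyte holds roughly 1,800 records and a gigabyte roughly 1.8 million, which is why a curated catalog of a few hundred titles stays in the hundreds-of-KB range.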

rsync · 3 years ago
… taking this a step further… you could place the message you speak of tweeting into an ‘oh by’ code[1] … and then just chalk it onto the sidewalk.

Now passers-by can receive the message, time shifted, without the Internet.

[1] 0x.co

pradn · 3 years ago
Actually, you can do another cool thing, too. The IPFS ecosystem has IPNS, which generates a name derived from a public/private key pair; that name can point to an IPFS hash. This way, a shadow-librarian group can have one IPNS name that always points to the latest catalog.
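
With the stock IPFS CLI this is a three-command workflow; a pseudocode-style sketch with placeholder values (the catalog filename and the angle-bracket placeholders are hypothetical):

```shell
# Add the latest catalog to IPFS; --quieter prints only the resulting CID
ipfs add --quieter catalog-latest.sqlite

# Publish that CID under this node's IPNS key; the IPNS name stays stable
# across catalog updates
ipfs name publish /ipfs/<CID-from-above>

# Anyone can resolve the stable name to the current catalog CID
ipfs name resolve /ipns/<library-ipns-name>
```

Only the keyholder can update what the IPNS name resolves to, so followers need to learn the name just once.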
shaunsingh0207 · 3 years ago
https://annas-archive.org does a lot of this, and has been my go-to for books for a while now.
pradn · 3 years ago
Yes! Anna has been active in helping duplicate Zlib, among other efforts. No one person can do it all. We need to pitch in!
superpirate · 3 years ago
It has already been done, you will be glad to learn:

https://bafybeigpp6mtsmjngaqscfkjwzivbptt4ui5yb7uih6qe43obof...

For me it has worked in Brave, Chrome, Firefox, and Safari Tech Preview, with or without IPFS and IPFS Companion installed (with IPFS it works much better). It hasn't worked in Safari on iOS or in non-Chrome browsers on Android.

It's not very fast and sometimes requires reloading the page, but the overall impression is awesome.

pradn · 3 years ago
Thank you! This looks promising!
mich_mechanic · 3 years ago
Guys, you already have this; it's named Nexus. It even does torrents.

https://bafybeigpp6mtsmjngaqscfkjwzivbptt4ui5yb7uih6qe43obof...

You must learn about them.

ornornor · 3 years ago
An alternative that works today is #bookz on undernet.
mikewarot · 3 years ago
Domain names turned out to be a weak point susceptible to attack by the statists. To route around this weakness, an array of names is used.

However, there is still the matter of having an account to get to these names. Which was the original reason the statists went after them in the first place. The users themselves will thus become the next target, just like in the days of Napster.

fudgefactorfive · 3 years ago
To me that was the real strength of IPv6. (I know, I know: an inefficient protocol with a complex upgrade path led to near-negligible adoption.)

NAT "fixed" the problem of address exhaustion, but it killed the old internet. You cannot run your own network anymore. In the "old" times, I gave you a phone number or IP address and that's it, direct connection. All anyone could do was show up and take the computer to stop that. Sure there's a phone company or ISP involved, but they just powered the pump, you completely controlled what went through it.

Now I can't do that. They ran out of addresses and I share an address with X unknown others. So I can't give you a home address, just a bank of doors. I could give you an apartment number, but that's also shifting transparently, so number X to you is number Y to someone else.

IPv6 would have solved the problem of exhaustion while preserving the right to an address. I could be some number permanently and you could reliably find a connection to my system using it. In that world I could set up a private DNS service in my house no one can alter without physically plugging in. Then have that store records to other addresses. Every part of that chain requires someone finding you and showing up at your door to disrupt.

Instead, now I have to pay Digital Ocean 5 bucks to keep an address for me so anything can find me via them. A bunch of servers in my home is effectively an island without a coordinate until DO points me out on request. It's like having all mail addressed to the local town hall for them to forward to me. Sure, maybe you trust your local town hall, but they are fundamentally beholden to someone else.

With IPv6 support and adoption a whole network could be set up independent of any other authority besides BGP. Which requires nation-state levels of mobilization just to block an address, with fallout affecting literally thousands of others. They'd have to nuke a block to suppress any site, only for that site to find another address and be back to normal within minutes. Instead they do a WHOIS, send a scary email and boom, you're unknown, unfindable and disconnected. Hoping that word of mouth brings people to your new "address" exactly like losing your phone (and SIM) while abroad.

I know it sucks as a protocol but v6 to me is a massive extremely important development that would change how the internet, and from that all communication, works.

conradev · 3 years ago
> With IPv6 support and adoption a whole network could be set up independent of any other authority besides BGP.

Private individuals have access to IPv4 blocks and maintain their own sovereign networks. That fact doesn't change the reality that most people, most of the time, pay a network operator (ISP, telecom) to operate their network. Network operators aren't going anywhere, and they still maintain full control over how packets transit their network. In the case of WWAN networks, they will also know roughly where you are.

All IPv6 does is expand the address space and put the price of an address within reach of anyone... but it doesn't change the knowledge or hardware required to run your own network.

mindslight · 3 years ago
IP addresses are just a different type of name, and also assigned by hierarchical entities. NAT isn't the issue, rather it's the incumbent power structures gradually tightening the identity/control screws. If you have a public IP on your physical connection and use that for banned publishing, they go after the account holder listed for the physical connection, which eventually gets back to you - the same as if you obtain that public IP from Digital Ocean or a tunnel broker.

The only way around that is using naming systems that don't rely on centralized authorities, or at least can't be coerced by governments.

dark-star · 3 years ago
> With IPv6 support and adoption a whole network could be set up independent of any other authority besides BGP. Which requires nation-state levels of mobilization just to block an address, with fallout affecting literally thousands of others.

This is not how it works. Taking down a single IPv6 address (or a whole AS) is a very simple thing and is done daily to combat spam and DDoS attacks, without requiring "nation-state levels of mobilization" (whatever that means). Also, there is essentially no "fallout" at all in IPv6, and there isn't any fallout in IPv4 either, since BGP routes can be as specific as a single host.

scarmig · 3 years ago
Can't they just send a scary email to the AS administrator who then removes the offending address block from its routing tables? Or are you imagining folks migrating to ones that don't respond to such requests?
icedchai · 3 years ago
Even if you have your own IP block, ASN, are set up with multiple BGP peers/upstreams, they can always go to those upstreams and have you filtered/blocked. IPv6 is cheap and plentiful, that’s all.
sitzkrieg · 3 years ago
Your ISP is sharing an IP with other customers? I have never, ever seen that across three countries' worth of residential ISPs. I doubt it's possible and want to make sure it's true (and concerning).
irrational · 3 years ago
I tried it. The url I got looked like guid.domain.net. At first I was thinking that the guid part must be unique for every user, but then the domain.net part is still susceptible to being seized. So… without being able to compare the url I got with other people, I’m left wondering how this actually works.
BHSPitMonkey · 3 years ago
FTA:

"The domain names in question are subdomains of newly registered TLDs that rely on different domain name registries."

There are multiple TLDs/SLDs involved (and the pool will likely grow over time)

dkjaudyeqooe · 3 years ago
> domain.net part is still susceptible to being seized

Yes it is, but how do you discover the domains? There could be just a few hundred users per domain. Then you have to expend substantial effort to seize each domain.

Meanwhile any affected user just moves to their second domain. Even if the authorities got much better at taking down domains the only issue would be increasing the number of extra domains per user.

I can't see how the authorities can beat this.

stevenhuang · 3 years ago
I had to use https://singlelogin.me/ for it to generate the special domain, so can't the central https://singlelogin.me/ domain be seized at some point?
webmaven · 3 years ago
> I tried it. The url I got looked like guid.domain.net. At first I was thinking that the guid part must be unique for every user, but then the domain.net part is still susceptible to being seized.

The GUID seems to be unique per-user, but also per ccTLD (mine are GUID.DOMAIN.cz and OTHERGUID.DOMAIN.ph).

I would guess that the pool of registered DOMAIN.ccTLD will grow faster than they can be blocked or seized, that new user per-domain GUIDs can be issued on demand onto different registered domains, and that there is an unused reserve of registered domains ready for deployment.
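
The issuance side of such a scheme is simple to picture. A hypothetical sketch (the domain pool and function names are invented, not Z-Library's actual mechanism): each account gets a random GUID label on domains drawn from a reserve pool, so seizing one domain only invalidates a slice of the issued URLs.

```python
# Hypothetical sketch of per-user subdomain issuance across a domain pool.
import random
import uuid

DOMAIN_POOL = ["example-lib.cz", "example-lib.ph"]  # stand-ins, not real domains

def issue_personal_urls(n_domains: int = 2) -> list[str]:
    """Give a user one GUID subdomain on each of n distinct pooled domains."""
    domains = random.sample(DOMAIN_POOL, n_domains)
    return [f"{uuid.uuid4().hex}.{d}" for d in domains]

urls = issue_personal_urls()
print(urls)
```

Because each user holds URLs on several independent registries, a takedown of any single domain leaves the others working, matching the per-ccTLD GUIDs observed above.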

oseityphelysiol · 3 years ago
How did they take down the domain last time? Was it by picking on the registrar?

In my country they do this by asking all the ISPs to block the domain on their DNS servers. This works for 90% of the population, but all you have to do is change your DNS server to something other than what the ISP gives you and you're good to go.

Also, I just don't get how the current approach is any better. As far as I understand, there's still a single point of failure, i.e. the site you get your "personal" domain from.

EarlKing · 3 years ago
I'm honestly shocked no one has openly laughed themselves silly at the idea of "personalized domains" for a site openly engaging in piracy... because surely that wouldn't be a way to build a stronger case against individual users engaged in piracy, riiiiiight?
hellotomyrars · 3 years ago
Depends. If the database that contains those domains and the accounts they are linked to is obtained by whatever law enforcement agency gets them, then sure. But the network of torrent piracy detection doesn't work for this situation, so it requires that database to be seized. It is also possible the information is not stored after it is processed. Because copyright trolls can't get the information like they do with torrents, they can't easily send bullshit threatening notices to end users, and it is extremely doubtful that the FBI is going to provide linking information for individual prosecution. I don't believe there is any example of that, though I'd be interested to hear otherwise.
braingenious · 3 years ago
I have heard of people using throwaway emails and VPNs for this sort of stuff!
jocaal · 3 years ago
you can get the personal domain from tor and then use the domain on the regular net.
voldacar · 3 years ago
How are they able to afford so many domain names? And what stops the state from just asking the domain registrar for the details of who purchased the domains? Besides, there are only so many domain registrars in the world; eventually you will lose the ability to purchase new domains.

And if the actual web server is behind a service like cloudflare, the state can just ask cloudflare for the IP of the real server, then ask the datacenter who owns the server at IP x ...

Run_DOS_Run · 3 years ago
>what stops the state from just asking the domain registrar for the details of who purchased the domains

Some domain registrars don't ask for your personal data, and those that do ask won't verify it.

>then ask the datacenter who owns the server at IP x ...

Many datacenters in China and Russia don't care about some warez, and if the zlib staff pays over Tor with cryptocurrency, the datacenter also doesn't know who rents the server.

x-desire · 3 years ago
Users don't actually get a personalized domain for each account, only a subdomain.
Steltek · 3 years ago
That title is straight out of a cyberpunk novel.
aborsy · 3 years ago
I don’t know what the solution to this problem is, but this library is extremely useful. Most of the time you don’t want to read a book in its entirety, but to check something, read a section, or browse to see if what you’re looking for is there.

Eventually, you may buy a book that you know is worth it. Right now even the table of contents may not be available before buying.