I'm so tired of this being repeated as if founders choose ISOs to screw over employees. It's flat wrong. ISOs offer the best tax advantages for employees. The 90-day exercise window is government-imposed. If you want it changed, go talk to them.
I'd much rather give an employee an instrument where they don't have to worry about any taxes at all until they actually want to exercise than have them sign a document they almost certainly don't understand, receive stock they get immediately taxed on, and then be confused about why the IRS taxed them for stock they can't sell and that might end up being worthless.
I've observed many startup founders who are disdainful of employees who leave, ever, for any reason, and definitely don't want them to receive proceeds from any of the company's future successes.
1. How does this system deal with the "data withholding" problem? In other words, when people provide "storage power" their data will be repeatedly sampled to make sure it is available... but when an entity claims that samples aren't being provided as required by the protocol, how does the system determine whether that claimant was lying, if the sampled data is still provided correctly in a follow-up request? If the answer is "through arbitration", what prevents the arbitration system from being DDOSed?
2. The "verified clients" are certified by "a decentralized network of verifiers". How does this system prevent a Sybil attack, i.e. how does it prevent verifiers from repeatedly verifying themselves using multiple accounts?
3. I notice this system doesn't mention the use of erasure coding, a common feature of similar schemes from other projects. Why isn't erasure coding necessary in this system? In other words, if data is randomly sampled, how does a client make sure 0.001% of the data isn't missing if only 99.999% or less of the data has been sampled so far?
4. The filecoin organization has a ton of funds due to their successful ICO. This makes it hard for users of the filecoin network to know if it is truly scalable (since the filecoin org could just run a bunch of anonymous server farms with their funds that provide free storage to paper over flaws in the cryptoeconomic incentives). How can a user of filecoin get some assurance that the files they are storing aren't just sitting on a server run by the filecoin organization, and are truly being stored on a decentralized system functioning through the specified cryptoeconomic mechanism?
> How does this system deal with the "data withholding" problem? In other words, when people provide "storage power" their data will be repeatedly sampled to make sure it is available... but when an entity claims that samples aren't being provided as required by the protocol, how does the system determine whether that claimant was lying, if the sampled data is still provided correctly in a follow-up request?
Filecoin sort of splits this problem into two parts – "data withholding" from Filecoin's proof-of-spacetime consensus mechanism (a "storage fault" in Filecoin terminology, yes I know there's a lot of new terminology here!), and "data withholding" from a client that's requesting stored data.
Storage miners are required to prove to the network itself, not to any specific challenger entity, that they're storing files. Each storage miner is (basically) randomly challenged once per [short interval] to provide a compressed cryptographic proof in response to a challenge. The proof conclusively confirms that, during that period, the miner was storing the data they'd previously promised to store. You can ctrl-f "if a miner goes offline" in the linked post for a surface-level description of how the network deals with storage faults. Ditching the data and recovering it later is economically irrational for pretty involved reasons – basically, recovery is more expensive than just storing the data over the short-ish intervals during which faults are recoverable.
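To make the challenge/response shape concrete, here's a toy Merkle-tree possession proof in Python. This is purely illustrative and is not Filecoin's actual proof system (real proofs-of-spacetime are far more involved), but the interaction pattern is the same: the prover commits to the data up front, receives a random challenge, and answers with a short proof that's cheap to verify.

```python
# Toy sketch (NOT Filecoin's actual PoSt): Merkle-tree challenge/response
# showing how a prover demonstrates possession of a randomly chosen block
# without resending the whole file.
import hashlib
import random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_tree(blocks):
    """Return the list of tree levels; level 0 holds the leaf hashes."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(blocks, levels, index):
    """Return the challenged block plus sibling hashes up to the root."""
    path, i = [], index
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[i ^ 1])  # sibling at this level
        i //= 2
    return blocks[index], path

def verify(root, index, block, path):
    """Recompute the root from the block and its authentication path."""
    node, i = h(block), index
    for sibling in path:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

blocks = [f"block-{i}".encode() for i in range(8)]
levels = merkle_tree(blocks)
root = levels[-1][0]          # the commitment published up front

challenge = random.randrange(len(blocks))   # random challenge per interval
block, path = prove(blocks, levels, challenge)
assert verify(root, challenge, block, path)
```

The key property is that the verifier only ever sees the root commitment and a logarithmic-size proof, never the full data.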
When it comes to "withholding data" from clients – retrieval on Filecoin is just a market-based system for bandwidth. The solution to holding data "hostage," i.e. refusing to serve it at reasonable prices, is to store a few replicated copies (just like centralized storage services do for you today behind the scenes). There's really no upside to miners refusing to profitably serve you a file when they know or suspect you can get it from another source.
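From the client's side, the replication strategy above is simple to sketch. Here `fetch` is a hypothetical stand-in for a real retrieval call; the point is that the client verifies content against a hash it recorded at storage time, so a withholding or misbehaving miner just gets skipped.

```python
# Client-side sketch of the replication strategy: keep the same file with
# several independent miners and take the first response that checks out.
# `fetch` is a hypothetical stand-in for a real retrieval-market request.
import hashlib

def fetch_any(replicas, fetch, expected_hash):
    """Try each replica in turn; accept the first response whose content
    hash matches what the client recorded at storage time."""
    for miner in replicas:
        data = fetch(miner)
        if data is not None and hashlib.sha256(data).hexdigest() == expected_hash:
            return data
    raise IOError("no replica served valid data")

# Toy demo: one miner withholds, one serves garbage, one behaves.
stored = b"my file contents"
digest = hashlib.sha256(stored).hexdigest()
responses = {"miner-a": None, "miner-b": b"corrupted", "miner-c": stored}
data = fetch_any(["miner-a", "miner-b", "miner-c"], responses.get, digest)
assert data == stored
```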
> The "verified clients" are certified by "a decentralized network of verifiers". How does this system prevent a Sybil attack, i.e. how does it prevent verifiers from repeatedly verifying themselves using multiple accounts?
The short answer here – with apologies for the brevity; details forthcoming – is that verified data isn't meant to be scarce, and some degree of over-verification is expected. There will be a decentralized group of folks responsible for (quite permissively) verifying and renewing clients for fixed amounts of data, and declining to renew allocations for clients who seem to be abusively verifying data. We're optimistic that this will dramatically decrease the rate at which "fake" data is stored and (most importantly) ensure that there's always storage available for client data.
> Why is it that erasure coding isn't necessary in this system?
Basically: cool, novel cryptography! In particular, this is where proofs-of-replication and proofs-of-spacetime kick in. Check out this podcast with Juan to learn much more: https://filecoin.io/blog/filecoin-proof-system/
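For intuition on the gap that naive random sampling leaves (the gap the question points at), the arithmetic is straightforward: if a fraction f of the data is missing and a client draws k independent uniform samples, the loss goes undetected with probability (1 - f)^k, which stays close to 1 for tiny f even after many samples. That's part of why per-interval proofs over the entire replica are so useful.

```python
# Arithmetic behind question 3: probability that k independent uniform
# samples all miss a missing fraction f of the data.
f = 0.00001          # 0.001% of the data missing
for k in (1_000, 100_000, 10_000_000):
    p_undetected = (1 - f) ** k
    print(f"{k:>10} samples: undetected with probability {p_undetected:.4f}")
```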
(Also – if you like erasure coding, it is totally compatible with Filecoin whether you're a miner or a client! I would be surprised if this feature isn't developed by the community in Filecoin's early days.)
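As a sketch of what such a community-built layer could look like, here's the simplest possible erasure code: k data shards plus one XOR parity shard, which tolerates the loss of any single shard. Real deployments would use Reed-Solomon codes (which tolerate multiple losses), but the recovery principle is the same.

```python
# Minimal erasure-coding sketch a client could layer on top of storage:
# k equal-length data shards plus one XOR parity shard, so any single
# lost shard is recoverable. (Real systems use Reed-Solomon codes.)

def encode(shards):
    """Append an XOR parity shard to k equal-length data shards."""
    parity = bytes(len(shards[0]))
    for s in shards:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return shards + [parity]

def recover(shards, missing):
    """Rebuild the shard at index `missing` by XOR-ing the survivors."""
    out = bytes(len(next(s for s in shards if s is not None)))
    for i, s in enumerate(shards):
        if i != missing:
            out = bytes(a ^ b for a, b in zip(out, s))
    return out

data = [b"aaaa", b"bbbb", b"cccc"]
coded = encode(data)
lost = 1  # pretend one shard went missing
survivors = [s if i != lost else None for i, s in enumerate(coded)]
assert recover(survivors, lost) == coded[lost]
```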
> How can a user of filecoin get some assurance that the files they are storing aren't just sitting on a server run by the filecoin organization?
Really fair question. First and foremost, as a client, you get to choose your storage miner if you want to. You then have to solve another problem, of course, which is how to map a Filecoin peer ID to a real-world actor (or prove that it's not being run by Filecoin, or whatever). This is solvable in a bunch of different ways, which I won't get into here, but the high-level takeaway is that you're not just throwing your data at an undifferentiated storage interface with obscure inner workings.
More fundamentally – Filecoin is part of a huge ecosystem of open source projects. Transparency is a key value – highlighting the success of the community, including the many decentralized storage miners participating in Filecoin, is really important to us and the only way the network can succeed. You can hop on our Slack any time (https://join.slack.com/t/filecoinproject/shared_invite/zt-dj...) to chat with the many folks already building on Filecoin. If you have other ideas on how we can establish that there are lots of groups operating on the network, not just us, let us know and I'll see what we can do :)
And don’t get me wrong, I like React. And the projects I’ve used it on have impressed clients beyond what we could achieve with just backend HTML templates. But for a landing page that you’re driving paid traffic to? Not a good use case at all.
Using Next with Preact instead of React makes the total gzipped JS bundle around 20kb. Takes no time to set up.
If you’ll only ever build one website, sure, use basic tools. If you can amortize the cost of learning the toolchain across many projects, use better tools! (I’m not saying that’s only React by any means.)
yes, existing government systems are insanely complex - that’s part of the problem! the essential complexity is not higher than that of a brain-computer interface, or an interplanetary rocket.
we don’t even know what these kids’ mandate is (also disappointing). but if your general premise is “smart outsiders who are good at engineering are always the wrong people to rework complex, inefficient systems,” i’d like to think you’re on the wrong site.