Most online communities start democratic but eventually become oligarchies. Reddit has upvotes but moderators have absolute power. Discord servers are basically digital dictatorships.
I've been wondering: Is it possible to build a truly democratic tech community?
Key challenges:
- How do you prevent mob rule while maintaining democracy?
- Should all voices be equal, or should expertise/contribution matter?
- How do you handle spam/quality without authoritarian moderation?
Curious if anyone has seen successful examples, or has thoughts on what the key principles should be.
(Context: Currently researching governance models for a developer platform)
We're experimenting with something similar - a "Star" system where users earn influence through contributions and can spend it on governance decisions. Early results suggest contribution-based voting leads to much more thoughtful decisions.
How well does Discourse handle controversial governance choices? I'm curious if trust levels work when communities face difficult decisions.
- Use a boilerplate rules list like "no spam, no personal attacks, no hate speech, don't be obnoxious, etc." (but more specific, e.g. https://www.statsoc.org.au/Forum-rules or https://macrumors.zendesk.com/hc/en-us/articles/201265337-Fo...). Have a "no politics" rule unless you want culture warring on your platform for whatever reason.
- Then you enforce the rules as you see fit. The final rule should be "moderators have discretion": if someone is pushing against the rules and/or irritating others, ban them. At the same time, be lenient and give second and third chances, at least for non-blatant offenses; escalate, first with a warning, then with a temp ban, then a longer temp ban, etc. It's a careful balance, but if done correctly, your forum will be both tolerant and not dominated by assholes.
- To prevent spam and banned users opening new accounts, either: make the forum invite-only (where users can invite others) and screen applicants; charge a small fee for signing up; require flexible proof of identity (e.g. one of: phone #, Google account, GitHub account, Facebook account, etc.); require new user posts to be approved by moderators before they show up; or something else. This will make it significantly harder for your userbase to grow, and many people will refuse to sign up, but it will also make spam and ban evasion much harder.
- If your forum grows enough, you can recruit moderators. You need enough people and activity to select moderators you trust, because every time you override their moderation or remove them beyond "very rarely", you look worse (more incompetent, power-tripping, incoherent) and the overall community "vibes" become slightly more toxic.
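The escalation ladder described above (warning, temp ban, longer temp ban, with discretion to skip steps for blatant offenses) can be sketched roughly like this. The thresholds and ban lengths here are hypothetical, not anything the post prescribes:

```python
from dataclasses import dataclass

# Hypothetical escalation ladder; real forums would tune steps and durations.
LADDER = ["warning", "temp_ban_3d", "temp_ban_30d", "permanent_ban"]

@dataclass
class ModerationRecord:
    strikes: int = 0

    def next_action(self, blatant: bool = False) -> str:
        """Return the sanction for this offense and record a strike.

        Non-blatant offenses climb the ladder one step at a time;
        blatant ones (spam bots, etc.) jump straight to the top.
        """
        if blatant:
            self.strikes = len(LADDER) - 1
        action = LADDER[min(self.strikes, len(LADDER) - 1)]
        self.strikes += 1
        return action

record = ModerationRecord()
print(record.next_action())  # warning
print(record.next_action())  # temp_ban_3d
print(record.next_action())  # temp_ban_30d
print(record.next_action())  # permanent_ban
```

The "moderators have discretion" rule is the `blatant` flag here: the ladder is a default, not a contract the worst offenders can lawyer against.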
The idea is more like an internet utopia: if the participants are engaged and high-quality enough, maybe traditional moderation becomes unnecessary? We're not trying to scale to millions of users anyway. Have you seen small, closed communities work without formal moderation? Or does human nature always require some kind of enforcement mechanism?
I'm curious about the cold start dilemma though: invite-only is great for quality, but creates a chicken-and-egg problem for early adoption. Do you think it's worth the slower growth from day one?
Also, for initial promotion - better to avoid platforms like Twitter (where average user quality is lower) and focus on higher-quality channels, even if reach is smaller?
Would love your thoughts on balancing growth vs quality in early stages.
https://en.wikipedia.org/wiki/Eternal_September
That’s the real problem. I don’t tend to believe in natural hierarchies, but I do believe in this one:
https://en.wikipedia.org/wiki/Diffusion_of_innovations
In some sense early adopters are better than other people: things start out cool and deteriorate, and one way to counter that is, when the party gets too big, to start a new party and get the early adopters to come along. I would also point to this essay:
https://www.jofreeman.com/joreen/tyranny.htm
I think her analysis is factually right, but I take the opposite position: the ‘structureless’ organization she describes is capable of activism that more sustainable groups just can’t do. So form that kind of organization when you can, knowing it isn’t going to last.
Sustainability is non-profit speak for ‘profitability’, and if you value that, an organization becomes Oxfam or the ACLU or the Mozilla Foundation and suffers from the corruption of
https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
I’d say ‘benign despotism’ is alright for an organization where you’ve got the right to exit
https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty
But due process, democracy and all that are necessary for when you don’t have exit.
Jürgen Habermas wrote a ponderous two-volume book
https://en.wikipedia.org/wiki/The_Theory_of_Communicative_Ac...
which pursues the idea of a perfect deliberative process. On one hand that ideal seems closer because of widespread electronic communication, yet our experience with things like Twitter makes it seem terribly naive, caught between (1) people not acting in good faith and (2) others believing that people are not acting in good faith.
We're actually launching a developer platform called GistFans that experiments with these governance questions - contribution-based voting, transparent processes, etc.
Would love your perspective on our approach - happy to share details if you're interested.
Your analysis of Habermas and "perfect deliberative process" is exactly what we're grappling with in GistFans. The tension you identify between early adopter quality and scalability, the corruption of sustainable organizations vs the power of "structureless" activism - these are the core contradictions we're trying to navigate.
We're experimenting with a "Stars" system where users earn influence through contributions, then spend these stars on governance decisions. The hypothesis: when participation has real cost (earned influence), people might act more thoughtfully - potentially addressing both the good faith problem AND the "believing others act in bad faith" issue you mention about Twitter.
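The post doesn't describe how GistFans implements this, so here is a hypothetical minimal sketch of a "Stars" ledger: influence is earned through contributions and spent to vote, so every vote has real cost. The class and method names (`StarLedger`, `earn`, `vote`) are made up for illustration:

```python
from collections import defaultdict

class StarLedger:
    """Hypothetical 'Stars' governance ledger: contribution-based
    influence that is consumed when spent on a decision."""

    def __init__(self):
        self.balances = defaultdict(int)                       # user -> stars held
        self.tallies = defaultdict(lambda: defaultdict(int))   # proposal -> choice -> stars

    def earn(self, user: str, stars: int) -> None:
        # e.g. awarded when a contribution is reviewed and accepted
        self.balances[user] += stars

    def vote(self, user: str, proposal: str, choice: str, stars: int) -> None:
        if stars <= 0 or stars > self.balances[user]:
            raise ValueError("must spend stars you actually hold")
        self.balances[user] -= stars       # spent stars are gone: real cost
        self.tallies[proposal][choice] += stars

    def result(self, proposal: str) -> str:
        tally = self.tallies[proposal]
        return max(tally, key=tally.get)

ledger = StarLedger()
ledger.earn("alice", 10)
ledger.earn("bob", 3)
ledger.vote("alice", "new-rule", "yes", 4)  # alice keeps 6 in reserve
ledger.vote("bob", "new-rule", "no", 3)     # bob goes all in
print(ledger.result("new-rule"))  # yes
```

The interesting design question is the one raised above: because spent stars don't come back, voters must weigh this decision against future ones, which is precisely the "real cost" hypothesis.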
But your point about "benign despotism with exit rights" is fascinating. Maybe the key isn't eliminating hierarchy but making it transparent and merit-based rather than arbitrary?
We're deliberately staying small and experimental rather than chasing sustainability/growth. Better to run genuine experiments that inform future builders than create another corrupted institution.
Have you seen any examples where contribution-based influence actually improved deliberative quality? Or do the fundamental human nature issues make this unsolvable through design?
The fact that the "retaliation equilibrium" worked initially is fascinating - it shows that peer accountability can work at small scale. But the faction coordination problem seems almost inevitable as communities grow.
Makes me think the key might be preventing large factions from forming in the first place, rather than trying to make democracy work despite them.
In fact, we’ve seen people who try to assert authority join the community, but they usually don’t last long. They naturally drift away because they don’t truly understand the value of long-term commitment and authentic relationships.
If you're interested, we've written a formal proof that explains how the structure prevents gatekeeping:
https://github.com/contribution-protocol/contribution-protoc...