It sounds like your community has figured out the first kind of power somewhat, but I urge you to consider carefully the second form of power.
It's likely that your community, like many, is currently run by its founders — successful leaders because they enforce rules people want to follow anyway. Members have voted them into power with their feet (if they didn't like the leader, they would leave).
But what happens to the community when you leave or die? Often a member of the community steps up to take the reins, and while that person might understand the goals of the organization, they might not understand how to implement them, especially when doing so requires setting aside their own ego and basic human urges. Almost no organization survives its first few changes of leadership in a positive form. You may think you're okay with this, that your organization can end with you, but keep in mind that it may live on and cause more damage than it ever did good.
The solution is to create rules which limit your own power and give the community the ability to enforce those rules on you. That way your community has the power to survive a transition of power.
This is the problem for me right now. I don't have a plan for replacing the social contact of Facebook (and Facebook isn't giving me anywhere close to what I need). I'm also struggling with depression and am pretty socially withdrawn. As soon as the current blues pass and I'm able to come up with a real-life third place [1], I hope to start limiting how much time I spend there and eventually quit altogether.
[1] From the article - https://en.wikipedia.org/wiki/Third_place
When I started a small software company, I originally had a similar understanding. I subconsciously thought that the software should be priced to cover its development cost, plus some profit. I was not very successful until I realized that the product should be priced as a percentage of the value it delivers to the customer. Customers don't want you to do a lot of work; they just want their problem solved for a price that is reasonable relative to the benefit of the solution.
This idea reverses several of your conclusions:
1. The product should be priced based on how much value it delivers, and only those companies that can deliver the product for significantly less than that price will stay in business. Once a company finds a need that people will pay for, it generally makes sense to drive the cost of production down while maintaining the same benefit, thus maximizing profit.
2. The more value a company creates with the resources it uses (the greater its margin), the more left-over resources it will have to invest in producing still more benefits, or to return profits to its original investors.
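The contrast between the two pricing mindsets can be sketched with a few lines of arithmetic. All the numbers here are hypothetical, purely to illustrate the point:

```python
# Hypothetical numbers (not from the thread) to contrast cost-plus
# pricing with value-based pricing.
dev_cost = 50_000          # what it costs to build and support the product
customer_value = 200_000   # what solving the problem is worth to the customer

# Cost-plus mindset: development cost plus a 20% profit margin.
cost_plus_price = dev_cost * 1.2          # 60,000

# Value-based mindset: charge a fraction of the value delivered.
value_based_price = customer_value * 0.4  # 80,000

# The spread between price and cost is what funds further investment
# (or returns to investors), per point 2 above.
margin = value_based_price - dev_cost     # 30,000
```

Same product, same cost — the only thing that changed is what the price is anchored to.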
So, going back to the cost of digital content: I am happy to pay for it, as long as I end up feeling the movie was worth watching for the price. And if they can produce great content without many resources (or with a lower cost of delivery), all the better. My problem right now is that there is so much great content I cannot ever hope to watch it all. But that is not a problem that really bothers me; I am happy to keep paying to have a long list of shows I'd like to watch, if I could just find the time.
I can't agree with that, and it's key to my point. There are some problems which we just don't know how to make embarrassingly parallel. Take something classic like mesh triangulation or mesh refinement. Nobody knows how to make those embarrassingly parallel. If you write it in Erlang, it's still not going to be embarrassingly parallel. And it won't scale linearly to N times faster on N cores no matter which language you write it in.
So it's just not true to say that any Erlang program should scale linearly. If nobody on earth knows how to make mesh refinement scale linearly, how will Erlang do it?
Maybe you mean you wouldn't choose to write those programs in Erlang? Then it's a meaningless claim: Erlang will linearly scale your program, but only if the program is naturally linearly scalable anyway. Erlang hasn't helped you do anything there, so why make a claim about it?
>Your Erlang program should just run N times faster on an N core processor
No, it won't. That only holds for tasks that /could/ be made completely (embarrassingly) parallel, as you say. Which is kind of circular.
No, I'm not claiming Erlang breaks Amdahl's Law. I'm claiming that Amdahl's Law applies less often than people think it does.
OK. Not an Erlang user, but I can't let this statement go.
I've studied parallel numerical algorithms. Many, if not most, of them involve blocking while you wait for results from other nodes.
If you're saying Erlang has somehow found a way to do those numerical algorithms without having to wait, then I'd love to see all those textbooks rewritten.
Amdahl's Law reigns supreme.
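For concreteness, here is the law the thread keeps invoking, as a minimal sketch (the 5%-serial figure is my own illustrative choice, not anyone's benchmark):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n cores when fraction p of the work
    parallelizes perfectly and the rest (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# A fully (embarrassingly) parallel task scales linearly:
print(amdahl_speedup(1.0, 64))            # 64.0

# But with even 5% serial work -- e.g. blocking on results from
# other nodes -- 64 cores deliver roughly 15x, not 64x:
print(round(amdahl_speedup(0.95, 64), 1))  # 15.4
```

No language, Erlang included, changes this arithmetic; it can only change how easy it is to express the parallel fraction.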