The older construction is made of very big stones of hard granite that fit together perfectly. Assuming they had some form of concrete, it is easy to see how they made them fit so well. If you have a source of materials, concrete is not difficult to make. See https://www.geopolymer.org/
People were not stupid, and technologies were invented and forgotten. Just as Roman technologies were lost in the Middle Ages, this building technology was lost to the Incas.
The Incas built their houses and temples on top of the existing ones. They used smaller stones that did not fit well together. Still a great culture, but with different technologies.
South America has a lot of cultures that disappeared. They had no written history, and a lot was destroyed by later cultures (including the Spanish), so it is impossible for historians to get it right.
For example, there were also people with elongated skulls and red hair in Peru. That could be a result of inbreeding, as they also had some other physiological differences. Maybe they were exterminated by another tribe. https://www.youtube.com/watch?v=5dfpLN3FbQs
History is often full of conflicts, but it is presented as if everything is known. There are frequent disputes with engineers, who point out the different technologies used for buildings and the like. These technologies do not fit in the simplified timeline of mainstream history.
This difference in technology is obvious in the extremely precise Egyptian granite vases (https://www.youtube.com/watch?v=7BlmFKSGBzI) and granite boxes.
The problem is that the components are often connected to different interfaces/graphs. Components can never be fully separated, due to debug, visualization, and storage requirements.
In non-OOP systems the interfaces are closed or absent, so you get huge debug, visualization, and storage functions that do everything, on top of their other functionality. And these functions need to be updated for each different type of data. The complexity just moves to a different part of the system. Most importantly, any new type requires changes to many functions, which affects the whole team and well-tested code. If your product is used by different companies with different requirements (different data types), these functions become overly complex.
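A minimal sketch of that trade-off, with hypothetical Shape/debugPrint names (TypeScript just for illustration):

```typescript
// Closed, non-OOP style: one function knows every data type.
// Adding a new type means editing this function (and serialize(),
// draw(), ...), touching code other people have already tested.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function debugPrint(s: Shape): string {
  switch (s.kind) {
    case "circle": return `circle r=${s.radius}`;
    case "rect":   return `rect ${s.width}x${s.height}`;
    // every new type: another case here, and in every similar function
  }
}

// Open, interface style: each type carries its own debug/storage
// behavior, so a new type is a new class and no existing code changes.
interface Debuggable { debugPrint(): string; }
interface Storable { serialize(): string; }

class Circle implements Debuggable, Storable {
  constructor(private radius: number) {}
  debugPrint() { return `circle r=${this.radius}`; }
  serialize() { return JSON.stringify({ kind: "circle", radius: this.radius }); }
}
```

With the open style, adding a new type means adding one class; with the closed style, it means editing every switch-based function in the codebase.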
But actually everything is merely waves and fields.
There's going to be a time when humans finally reconcile the quantum with the Newtonian, and I can't wait for that day.
There is also evidence that "photons" are just thresholds in the material used to detect light. The atoms vibrate along with the EM wave, and at a certain threshold they switch to a higher vibrational state that can release an electron. If the starting state is random, the release of an electron will often coincide with the amount of light emitted by a single atom.
This threshold means that one "photon" can cause zero or multiple detections. Eric Reiter tested this in many experiments and saw that this variation indeed happens, especially when the experiment is tuned to reveal it, for example by using high-frequency light. It also happens in experiments done by others, but they disregarded the zero or multiple detections as noise. I think the double-detection effect was discovered when he worked in a laboratory with ultraviolet light.
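To make the claim concrete, here is a toy Monte Carlo sketch of that accumulation-threshold idea. The uniform pre-loading and all numbers are assumptions for illustration, not Reiter's actual model:

```typescript
// Toy model: each detector atom holds a random pre-loaded energy and
// accumulates more from a continuous wave; it "clicks" (releases an
// electron) whenever it crosses the threshold. The average click count
// matches the energy sent in, but individual runs give fewer or more
// clicks than "one per photon".
function simulate(atoms: number, quantaIn: number): number {
  const threshold = 1.0; // energy needed to release one electron
  // random starting state: each atom pre-loaded somewhere below threshold
  const energy = Array.from({ length: atoms }, () => Math.random() * threshold);
  const perAtom = quantaIn / atoms; // wave energy spread over all atoms
  let clicks = 0;
  for (let i = 0; i < atoms; i++) {
    energy[i] += perAtom;
    while (energy[i] >= threshold) { // an atom may fire more than once
      clicks++;
      energy[i] -= threshold;
    }
  }
  return clicks;
}

// Send in 10 quanta of wave energy: the mean is 10 clicks, but single
// runs scatter (e.g. 6, 9, 13, ...), i.e. "zero or multiple" detections
// relative to a strict one-photon-one-click picture.
for (let run = 0; run < 5; run++) {
  console.log(simulate(1000, 10));
}
```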
Here is a paper about Eric Reiter's work: https://progress-in-physics.com/2014/PP-37-06.PDF and here is his book: https://drive.google.com/file/d/1BlY5IeTNdu1X6pRA5dnJvRq3ip6...
Another option is Erlang. At the top level it is organized with micro-services instead of functions.
None of them are systems languages. The old hardware had weird data and memory formats, and with C a lot of assembler could be avoided when programming that hardware. It also came as a default with Unix and some other operating systems. Fortran and Pascal were kind of similar.
The most common default languages on most systems were for interpreters, so you got LISP and BASIC. There was no fast hardware for those. To get things fast, you needed to program in assembler, unless a C compiler was available.
All people are biased. It's impossible to avoid bias; you even need some of it to filter the firehose of data.
What you're describing is often a form of moderation.
> Different opinions do matter. But due to the algorithms, the most emotional responses are promoted. There is no way to promote facts or what people think are facts.
This is tuneable. We have tuned the algos for engagement, and folks engage more with stuff they emotionally react to.
People could learn to be less emotionally unstable.
> So most discussion will be extremely emotional and not based on facts and their value. This is even true in scientific discussions.
I think you're overfitting. Moderation drives a lot of how folks behave in a community.
> Combined with group-think, these emotions can grow and lead to catastrophic outcomes.
Groupthink is also how we determined that mammals are mammals and that the Earth isn't the center of the universe. Sometimes a consensus is required.
There will be a bias in moderation, but that will have less of an effect when there is no deletion. If possible, the user could choose their preferred style (or bias) of moderation. If you want full freedom, you can let users select "super-users" to moderate/categorize for them.
Emotional responses and troll jokes could be separate categories, as long as they do not call for violence or break other laws.
Consensus is still groupthink. I think it is destructive without a clear view of where it stands among the other options and ideas, like: "why exactly is the Earth not the center?" A lot of consensus is also artificial, due to biased reporting, biased censorship, and biased sponsorship. During discussions, people within a consensus tend to use logical fallacies, like portraying the opposition as idiots or ignoring any valid points the opposition brings into the discussion.
I think that people have become less intelligent due to one-sided reporting of information. With extra information, people would become smarter and more understanding of how other (smart) people think.
I think it needs another item on the list: for any theory/hypothesis, how well does it stand against the null hypothesis? For example: how much physical evidence is there really for string theory?
And I would upgrade this one: if there's a chain of physical evidence (was: argument), every link in the chain must work (including the premise), not just most of them.
And breaking one of these items does not mean that something is false; it means that the arguments and evidence are incomplete. Don't jump to conclusions when you think an argument or piece of evidence is invalid (that is how some people come to believe the moon landing was a hoax).