Readit News
x1f604 commented on 0 A.D. Release 28: Boiorix   play0ad.com/new-release-0... · Posted by u/jonbaer
rand846633 · 18 days ago
My experience is the opposite: 0ad will lag on my laptop once thing become big. BAR will warn that it’s not compatible with my low end intel integrated potato gpu, but it works just fine..
x1f604 · 18 days ago
BAR runs fine on low end CPUs...until you have like 2,000 units on speed metal
x1f604 commented on 0 A.D. Release 28: Boiorix   play0ad.com/new-release-0... · Posted by u/jonbaer
decimalenough · 18 days ago
For me the main problem with 0AD multiplayer is that if any player loses their connection even for a moment for any reason, the game either halts completely or forks so that they can't rejoin. Quite frustrating, especially for longer campaigns. It's also impossible to save and restore in multiplayer.
x1f604 · 18 days ago
This is one of the problems that BAR solves beautifully: a player can leave and rejoin later, and the game continues running just fine. An existing player can choose whether to take over their units, or take them and hand them back when the player rejoins. Truly elegant.
x1f604 commented on 0 A.D. Release 28: Boiorix   play0ad.com/new-release-0... · Posted by u/jonbaer
ddtaylor · 18 days ago
0ad is a fun game, but the last few times I tried to play it with my friends it lagged very badly once a few units were moving around. I was actually able to get it to play roughly normally by hacking the pathfinding code to give up after a low fixed iteration count. It sort of worked, but obviously broke pathfinding a lot.

The crux of the issue is that their simulation is single-threaded. Making a simulation both deterministic and multi-threaded is a complicated problem, but I feel some of us could help them.

x1f604 · 18 days ago
I feel like the issue is more that their pathing algorithm is very inefficient. I'm not sure why using multiple cores would solve the problem if the cause of the lag is that their pathing algorithm is cubic time or something.
x1f604 commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
HarHarVeryFunny · 7 months ago
I think the gist of TFA is just that we need a new architecture etc., not scaling.

I suppose one can argue about whether designing a new AGI-capable architecture and learning algorithm(s) is a matter of engineering (applying what we already know) or research, but I think saying we need new scientific discoveries is going too far.

Neural nets seem to be the right technology, and we've now amassed a ton of knowledge and intuition about what neural nets can do and how to design with them. If there was any doubt, then LLMs, even if not brain-like, have proved the power of prediction as a learning technique - intelligence essentially is just successful prediction.

It seems pretty obvious that the rough requirements for a neural-net architecture for AGI are going to be something like our own neocortex and thalamo-cortical loop - something that learns to predict based on sensory feedback and prediction failure, including looping and working memory. Built-in "traits" like curiosity (prediction failure => focus) and boredom will be needed so that this sort of autonomous AGI puts itself into learning situations and is capable of true innovation.

The major piece to be designed/discovered isn't so much the architecture as the incremental learning algorithm, and I think if someone like Google DeepMind focused their money, talent, and resources on this then they could fairly easily get something that worked and could then be refined.

Demis Hassabis has recently given an estimate of human-level AGI in 5 years, but has indicated that a pre-trained(?) LLM may still be one component of it, so not clear exactly what they are trying to build in that time frame. Having a built-in LLM is likely to prove to be a mistake where the bitter lesson applies - better to build something capable of self-learning and just let it learn.

x1f604 · 7 months ago
> Demis Hassabis has recently given an estimate of human-level AGI in 5 years

He said 50% chance of AGI in 5 years.

x1f604 commented on Working with Files Is Hard (2019)   danluu.com/deconstruct-fi... · Posted by u/nathan_phoenix
yuboyt · a year ago
You're missing the point. GP was mentioning the common assumption that all systems in the last 30 years are sector-atomic under power-loss conditions: either the sector is fully written or not written at all. Optane was a rare counterexample, where a sector can become partially written, thus not sector-atomic.
x1f604 · a year ago
It is not rare for flash storage devices to lose data on power loss, even data that is FLUSH'd. See https://news.ycombinator.com/item?id=38371307

There are known cases where power loss during a write can corrupt previously written data (data at rest). This is not some rare occurrence. This is why enterprise flash storage devices have power loss protection.

See also: https://serverfault.com/questions/923971/is-there-a-way-to-p...

x1f604 commented on Pentaborane(9)   en.wikipedia.org/wiki/Pen... · Posted by u/rbanffy
GuB-42 · 2 years ago
This made me think about the excellent "things I won't work with" series by Derek Lowe.

https://www.science.org/topic/blog-category/things-i-wont-wo...

He didn't write an article about pentaborane; he left this one to Max Gergel https://www.science.org/content/blog-post/max-gergel-s-memoi...

x1f604 · 2 years ago
From the book:

(Warning: Spoilers ahead)

> The next day I told Parry that I was flattered but would not make pentaborane. He was affable, showed no surprise, no disappointment, just produced a list of names, most of which had been crossed off; ours was close to the bottom. He crossed us off and drove off in his little auto leaving for Gittman's, or perhaps, another victim. Later I heard that he visited two more candidates who displayed equal lack of interest and the following Spring the Navy put up its own plant, which blew up with considerable loss of life. The story did not make the press.

x1f604 commented on std::clamp generates less efficient assembly than std::min(max, std::max(min, v))   1f6042.blogspot.com/2024/... · Posted by u/x1f604
GrumpySloth · 2 years ago
But then it uses AVX instructions. (You can replace -march=znver1 with just -mavx.)

When AVX isn’t enabled, the std::min + std::max example still uses fewer instructions. Looks like a random register allocation failure.

x1f604 · 2 years ago
I don't think it's a register allocation failure but is in fact necessitated by the ABI requirement (calling convention) for the first parameter to be in xmm0 and the return value to also be placed into xmm0.

So when you have an algorithm like clamp, which requires v to be "preserved" throughout the computation, you can't overwrite xmm0 with the first instruction; basically you need to "save" and "restore" it, which means an extra instruction.

I'm not sure why this causes the extra assembly to be generated in the "realistic" code example though. See https://godbolt.org/z/hd44KjMMn

x1f604 commented on std::clamp generates less efficient assembly than std::min(max, std::max(min, v))   1f6042.blogspot.com/2024/... · Posted by u/x1f604
cmovq · 2 years ago
Thanks for sharing. I don't know if the C++ standard mandates one behavior or another, it really depends on how you want clamp to behave if the value is NaN. std::clamp returns NaN, while the reverse order returns the min value.
x1f604 · 2 years ago
Based on my reading of cppreference, it is required to return negative zero when you do std::clamp(-0.0f, +0.0f, +0.0f): when v compares equal to lo and hi, the function is required to return v, which the official std::clamp does but my incorrect clamp doesn't.
x1f604 commented on std::clamp generates less efficient assembly than std::min(max, std::max(min, v))   1f6042.blogspot.com/2024/... · Posted by u/x1f604
jeffbee · 2 years ago
Clang generates the shortest of these if you target sandybridge, or x86-64-v3, or later. The real article that's buried in this article is that compilers target k8-generic unless you tell them otherwise, and the features and cost model of opteron are obsolete.

Always specify your target.

x1f604 · 2 years ago
Even with -march=x86-64-v4 at -O3 the compiler still generates fewer lines of assembly for the incorrect clamp compared to the correct clamp for this "realistic" code:

https://godbolt.org/z/hd44KjMMn

x1f604 commented on std::clamp generates less efficient assembly than std::min(max, std::max(min, v))   1f6042.blogspot.com/2024/... · Posted by u/x1f604
tambre · 2 years ago
Both recent GCC and Clang are able to generate the optimal version for std::clamp() if you add something like -march=znver1, even at -O1 [0]. Interesting!

[0] https://godbolt.org/z/YsMMo7Kjz

x1f604 · 2 years ago
Even with -march=znver1 at -O3 the compiler still generates fewer lines of assembly for the incorrect clamp compared to the correct clamp for this "realistic" code:

https://godbolt.org/z/WMKbeq5TY
