Nvidia are disabling optimisations on their own hardware. The motivation appears to be that these optimisations are unsafe to apply to general code.
I wouldn’t call it incredibly impressive. The path to writing a minimal multi-tasking kernel was beaten decades ago.
Writing a kernel that can boot and do a few things is ‘just’ a matter of being somewhat smart and having some perseverance. Doing it for RISC-V complicates things a bit compared to x86, but there, too, the information about initialising the hardware is often easily obtained (for example: https://wiki.osdev.org/RISC-V_Meaty_Skeleton_with_QEMU_virt_...; I wouldn’t know whether this project used that)
I think the author agrees, given (FTA) that they wrote “This is a redo of an exercise I did for my undergraduate course in operating systems”
It’s work, maybe nice work, but I think everybody with a degree in software engineering could do this. Yes, the OS likely will have bugs and certainly will have rough edges, but getting something that can multi-process, with processes shielded from each other by an MMU, isn’t hard anymore.
It's the only chip manufacturer "left" in the US. The argument is national security: the US expects China to invade Taiwan and this will kill TSMC in the process.
Whether this will happen or not can be debated, but this is what the government expects.
Rather than inferring from how you imagine the architecture works, you can look at examples and counterexamples to see what capabilities models actually have.
One misconception is that predicting the next word means there is no internal idea of the word after next. The simple disproof of this is that models put 'an' instead of 'a' ahead of words beginning with vowels. It would be quite easy to detect (and exploit) behaviour where the model picked a vowel-initial word only because it had somewhat arbitrarily emitted an 'an'.
Models predict the next word, but they don't just predict the next word. They generate a great deal of internal information in service of that goal. Placing limits on their abilities by assuming the output they express is the sum total of what they have done is a mistake. The output probability is not what the model thinks; it is a reduction of what it thinks.
One of Andrej Karpathy's recent videos talked about how researchers showed that models do have an internal sense of not knowing the answer, but fine-tuning on question answering did not give them the ability to express that knowledge. Finding information the model did and didn't know, then fine-tuning it to say "I don't know" for the cases where it had no information, allowed the model to generalise and express "I don't know".
I think we should be a bit more aware of the impact of ordering everything through Amazon. Not only regarding delivery, but also the message it sends to local stores.
In all your examples -
1) Yes. It was a good thing
2) Yes. It is now a thing done to learn how to draw, and a niche skill
3) Yes, yes, yes.
If people are bemoaning the devaluing of a certain activity, yup, it’s true. It happens. There are fewer horses than there were yesterday.
Certain forms of activity get devalued. They are replaced by an alternative that creates surplus. But life goes on to bigger things.
The same goes for GenAI. Content is increasingly easy to create at scale. This reduced cost of production applies to both useful content and pollution.
Except that if finding valid information is made harder, then life becomes more complex and we don’t go on to bigger and better things.
The abundance of fabricated content that is indistinguishable from authentic content means that authentic content is devalued, and that anything consumed must now be verified before it can be trusted.
It increases the cost of trusting information, which reduces the overall value of the network. It’s like the market for lemons in used cars.
This is the looming problem. Hopefully something appears that mitigates the worst-case scenarios, but the medium case and even the bad case are well and truly alive.
In fact, most C++ developers believe that throwing an exception in a noexcept function is undefined behavior. It is not: the behavior is defined to call std::terminate. Which leads one to ask how it knows to call that. Because noexcept functions carry a hidden try/catch to see whether it should be called. The result is that noexcept can hurt performance, which is surprising behavior. C++ is just complicated.
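A minimal sketch of that defined behavior (just an illustration, not from the comment above); the std::set_terminate handler exists only to make the call visible:

    #include <cstdlib>
    #include <exception>
    #include <iostream>
    #include <stdexcept>

    // Throwing out of a noexcept function is not UB: the exception is not
    // allowed to propagate, and std::terminate is called instead.
    void boom() noexcept {
        throw std::runtime_error("escaping a noexcept function");
    }

    int main() {
        // Install a terminate handler purely to make the defined outcome visible.
        std::set_terminate([] {
            std::cerr << "std::terminate called\n";
            std::abort();
        });

        boom();  // prints "std::terminate called", then aborts
    }

Built with any C++11-or-later compiler, this prints the handler's message and aborts; nothing undefined happens.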