Sadly, the compile time is just as bad, but I think in this case the allocator is the biggest culprit, since disabling optimizations degrades run-time performance anyway. The Rust team should maybe look into shipping their own bundled allocator; "native" allocators are highly unpredictable.
[^1]: https://www.fermyon.com
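For what it's worth, Rust already lets a program swap out the allocator without touching the rest of the code, so a bundled default would essentially just change the out-of-the-box choice. A minimal sketch using the stdlib's `System` allocator (in practice you'd plug in a third-party crate such as jemalloc or mimalloc in the same spot):

```rust
// Any Rust binary can override the global allocator with the
// #[global_allocator] attribute. Here we explicitly select the
// stdlib's System allocator; a bundled or third-party allocator
// would be dropped in at the same place.
use std::alloc::System;

#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Every heap allocation below now goes through GLOBAL.
    let v: Vec<u32> = (0..1000).collect();
    println!("{}", v.iter().sum::<u32>()); // → 499500
}
```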
No, actual AI is smarter than Microsoft managers, it seems:
Here are some ideas for adding an arbitrary AI feature to your operating system quickly to make investors happy:
- AI File Search: search files and settings with natural-language queries
- Auto Window Layouts: AI-suggested window organization ("coding mode", "research mode" depending on detected usage patterns)
- Smart Notifications: automatic notification condensing to reduce clutter
- AI Clipboard: a clipboard history automatically categorized by content
- Predictive App Launcher: suggests apps based on time of day, usage patterns and recently opened files
- AI Wallpaper/Theme: smart visual suggestions, e.g. wallpaper based on current weather, mood, etc.
- Voice Quick Commands: AI-based voice OS control ("Open browser")
- AI System optimization: for example, content-based disk space cleanup
Any of the above would be better than this nonsense.
The only thing I'd need now is a way to get the original font and artwork positions (this would be a great addition to OlmOCR). Potentially I could work up a solution to create the font manually (as most medieval books are written in the same writing style), then find the shapes of the glyphs in the original image once I have the text, and then mask out the artwork with some OpenCV magic.
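The masking step can be sketched in a few lines. This is just the core idea, a fixed intensity threshold separating dark ink from lighter artwork and background, not real OpenCV code, and the threshold value is an assumption (a real pipeline would use adaptive thresholding and the known glyph shapes):

```rust
// Minimal sketch: given a grayscale page as rows of bytes (0 = black,
// 255 = white), keep only pixels darker than a threshold (assumed to
// be ink/glyphs) and blank everything else out to white.
fn mask_glyphs(page: &[Vec<u8>], threshold: u8) -> Vec<Vec<u8>> {
    page.iter()
        .map(|row| {
            row.iter()
                .map(|&p| if p < threshold { p } else { 255 })
                .collect()
        })
        .collect()
}

fn main() {
    // Tiny synthetic "page": dark values are glyph pixels.
    let page = vec![vec![10u8, 200, 30], vec![250, 15, 128]];
    let masked = mask_glyphs(&page, 100);
    println!("{:?}", masked); // → [[10, 255, 30], [255, 15, 255]]
}
```

Inverting the mask (keeping only pixels at or above the threshold) would give the artwork layer instead of the text layer.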
- The mechanism to interpret the light signal has to evolve in step with the eye itself. Getting light data without a brain evolving at the same time to interpret it is evolutionarily recessive, i.e. a useless function. A real evolution would be more like "cat /dev/urandom > output.html", not a controlled ecosystem with a clear penalty-reward system.
- In nature, there is no 1:1 "reward / selection function" like in this simulation. In the computer, this "motivation factor" is given externally, so that the next generation can be rewarded and selected; in reality, there is no fixed rule for what is and isn't "better", "fitter" or "more attractive to the other sex" (not that CS nerds would know). Sure, an organism can consume food, but beyond a certain point that would just make the organism fat, not stronger. So there would also need to be environmental mutations happening at the same time that reinforce "more food = better evolved".
- There has to be a way for the animal to be so dominant that the connection between light data and food can be genetically passed on and will not be associated with bad artifacts (see ChatGPT hallucinations for examples of "accidental bad artifacts in evolution" - and that "evolution" has millions of man-hours, money and R&D behind it).
- By the rule of "survival of the fittest", the next-generation mutation has to be, in one single step, such a significant improvement over the last one that it won't be selected out again by recessive selection or diluted inside the gene pool.
- The gene has to stay active for 150 subsequent generations, without failure, cancer or regression, and provide a dominant advantage 150 times in a row, just to get a basic "eye" for 2D navigation with 10 light sensors. The minimal snail eye (pre-Cambrian) has 14,000 cells [1] (and a snail cannot see color).
- The real world is a 3D environment, which adds a monumental amount of complexity. Add to it the complexity of depth, color, shape, ...
- The mutation(s) have to happen either "at once" or be widespread (otherwise it's going to be like an albino animal, i.e. some rare, neutral mutation).
- All of this has to happen in an environment hostile to life in general (e.g. the edge of underwater volcanoes, some primordial soup burning at several hundred degrees); all elements have to be in the right place at the same time, etc. And be created out of nothing, of course.
While I do agree that it can be helpful for computer vision, computerized "evolution" is just adaptive statistical pattern matching; it's absolutely nothing like real biology. It would be more realistic to just run "cat /dev/random > kernel-gen-xxx.iso" and then boot it bare-metal, with no lab environment, no operating system, no programming language, no goal function, no selection / reward process, no debugging, etc.
Even Darwin had his problems with the eye. The reason I believe in God is not necessarily because I want to, but because evolution (not survival-of-the-fittest, but the "mutation creates information" aspect) requires far more faith and far more dogmas, which cannot be questioned for the sake of science. When I was in 8th-grade biology, I took a stone from the schoolyard, put it on the teacher's desk and said: "Alright, so this is a human if we wait 4 billion years." The teacher ignored me, but never told me I was wrong.
Consciousness presupposes the ability to make conscious decisions: the ability to have introspection and, more importantly, free will (otherwise the decision would not be conscious, but robotic regurgitation), to reflect on and judge the "goodness" or "badness" of decisions, the "morality". Since it is evident that humans do not always do the logically best thing (look around you at how many people make garbage decisions), a machine can never function like a human can. It can never have opinions (that aren't pre-trained input), as it makes no distinction between good and bad without external input. A machine has no free will, which is a requirement for consciousness. At best, it can be a good facsimile. It can be useful, yes, but it cannot make conscious decisions.
The created cannot be bigger than the creator in terms of informational content; otherwise you'd create a supernatural "ghost" in the machine. I hope I don't have to explain why I consider creating ghosts unachievable. Even with photo or video AIs, there is no "new" content, just rehashed old content which is a subset of the training data (which is why AI-generated photos often have this kind of "smooth" look to them). The only reason the output of an AI has any meaning to us is because we give it meaning, not the computer.
So, before wasting millions of compute hours on this project, I'd first try to hire an indebted millennial who will be glad to finally put his philosophy degree to good use.
I just use ChatGPT for spelling fixes (e.g. when rewriting articles). You just have to instruct it to NOT auto-rephrase the article.
A company making money off of this kind of scheme would be happy to pay $200 a seat for an unlimited license. And I would not be surprised if there were many other very profitable use cases that make $200 per month seem like a bargain.
:^/