Readit News
fschuett commented on Go is still not good   blog.habets.se/2025/07/Go... · Posted by u/ustad
fschuett · 9 days ago
Technically, the "billion dollar mistake" dates to 1965, so adjusted for inflation it would now be a "10 billion dollar mistake" in 2025. Or, if the cost is measured in terms of housing, it would be a "21 billion dollar mistake".

:^/

fschuett commented on Launch HN: Issen (YC F24) – Personal AI language tutor    · Posted by u/mariano54
fschuett · 2 months ago
Please add Latin
fschuett commented on Why is the Rust compiler so slow?   sharnoff.io/blog/why-rust... · Posted by u/Bogdanp
fschuett · 2 months ago
For deploying Rust servers, I use Spin WASM functions [1], so no Docker or Kubernetes is necessary (not affiliated with them, just saying). I just build the final WASM binary, and the rest is managed by the runtime.

Sadly, the compile time is just as bad, but I think in this case the allocator is the biggest culprit, since disabling optimizations degrades run-time performance too much. The Rust team should maybe look into shipping their own bundled allocator; "native" system allocators are highly unpredictable.
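For context, Rust already has the hook a bundled allocator would use: the `#[global_allocator]` attribute. This is a toy sketch, not anything the Rust team ships; the counting wrapper is purely illustrative and just delegates to the system allocator, which is where a bundled or third-party allocator would be swapped in.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Toy allocator: counts allocations, then delegates to the system allocator.
struct CountingAlloc;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

// A bundled (or third-party) allocator would be plugged in at this point.
#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let v: Vec<u32> = (0..1000).collect();
    assert_eq!(v.len(), 1000);
    println!("allocations so far: {}", ALLOCS.load(Ordering::Relaxed));
}
```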

[1]: https://www.fermyon.com

fschuett commented on Canon Law Ninja   canonlaw.ninja/... · Posted by u/danielam
fschuett · 3 months ago
Would be great to have English translations for the 1917 Code (especially the canons regarding "what is a marriage", ... ). Right now I use Gemini to search the Latin text; it "works", but translations would still be a decent value-add. Just in case the maintainer sees this: semantic matching between the 1983 and 1917 canons would also be nice.
fschuett commented on By default, Signal doesn't recall   signal.org/blog/signal-do... · Posted by u/feross
fschuett · 3 months ago
> “Take a screenshot every few seconds” legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like “How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?”

No, actual AI is smarter than Microsoft managers, it seems:

Here are some ideas for adding an arbitrary AI feature to your operating system quickly to make investors happy:

- AI File Search: natural-language queries over files and settings

- Auto Window Layouts: AI-suggested window organization ("coding mode", "research mode", depending on detected usage patterns)

- Smart Notifications: automatic notification condensing to reduce clutter

- AI Clipboard: a clipboard history automatically categorized by content

- Predictive App Launcher: suggests apps based on time of day, usage, and recently opened files

- AI Wallpaper/Theme: smart visual suggestions, e.g. a wallpaper based on the current weather, mood, etc.

- Voice Quick Commands: AI-based voice control of the OS ("Open browser")

- AI System Optimization: for example, content-based disk-space cleanup

Any of the above are better than this nonsense.

fschuett commented on OlmOCR: Open-source tool to extract plain text from PDFs   olmocr.allenai.org/... · Posted by u/eamag
fschuett · 6 months ago
Very impressive: it's the only AI vision toolkit so far that actually recognizes Latin and medieval scripts. I've been trying to convert public-domain medieval books (including the artwork and original layout) to PDF so they can be re-printed, i.e. pages like this: https://i.imgur.com/YLuF9sa.png - I tried a Google Vision + o1 solution, which did work to some extent, but not on the first try. This even recognizes the "E" of the decorated initial (or fixes it from context), which many OCR and AI solutions fail at.

The only thing I'd need now is a way to get the original font and artwork positions (which would be a great addition to OlmOCR). Potentially I could create the font manually (as most medieval books are written in the same style of script), find the shapes of the glyphs in the original image once I have the text, and then mask out the artwork with some OpenCV magic.
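The masking step could be sketched like this, on a raw grayscale buffer rather than through OpenCV. Everything here is illustrative: the function name, the threshold, and the assumption that "artwork" means dark ink on a light parchment background.

```rust
// Hypothetical sketch: blank out dark artwork regions in a grayscale page
// scan by replacing every pixel below a darkness threshold with the
// background colour. A real pipeline would mask connected regions instead.
fn mask_artwork(pixels: &mut [u8], threshold: u8, background: u8) {
    for p in pixels.iter_mut() {
        if *p < threshold {
            *p = background;
        }
    }
}

fn main() {
    // 0 = black artwork ink, 255 = white parchment.
    let mut page = vec![0u8, 10, 200, 255, 30, 250];
    mask_artwork(&mut page, 64, 255);
    assert_eq!(page, vec![255, 255, 200, 255, 255, 250]);
    println!("masked page: {:?}", page);
}
```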

fschuett commented on What if Eye...?   eyes.mit.edu/... · Posted by u/smusamashah
fschuett · 7 months ago
In order for this to work in real life, you'd have to prove a lot of other invariants:

- The mechanism to interpret the light signal has to evolve in step with the eye itself. Getting light data without a brain evolving at the same time to interpret it is evolutionarily useless. Real evolution would look more like "cat /dev/urandom > output.html", not a controlled ecosystem with a clear penalty-reward system.

- In nature, there is no 1:1 "reward / selection function" like in this simulation. In the computer, the "motivation factor" is given externally, so the next generation can be rewarded and selected; in reality, there is no fixed rule for what is "better", "fitter", or "more attractive to the other sex" (not like CS nerds would know). Sure, an organism can consume food, but beyond a certain point more food just makes the organism fat, not stronger. So environmental mutations would also have to happen at the same time to reinforce "more food = better evolved".

- There has to be a way for the animal to be so dominant that the connection between light data and food can be genetically passed on and will not be associated with bad artifacts (see ChatGPT hallucinations for examples of "accidental bad artifacts in evolution"; and that "evolution" has millions of man-hours, money and R&D behind it).

- By the rule of "survival of the fittest", the next-generation mutation has to be, in one single step, such a significant improvement over the last one that it won't be selected out again by recessive selection or dilution in the gene pool.

- The gene has to stay active across 150 subsequent generations, without fail, cancer, or regression, and provide a dominant advantage 150 times in a row, just to get a basic "eye" for 2D navigation with 10 light sensors. The minimal snail eye (pre-Cambrian) has 14,000 cells [1] (and a snail cannot see color).

- The real world is a 3D environment, which adds a monumental amount of complexity. Add to it the complexity of depth, color, shape, ...

- The mutation(s) have to happen either "at once" or be widespread (otherwise it's going to be like an Albino animal, i.e. some rare neutral mutation).

- All of this has to happen in an environment hostile to life in general (i.e. the edge of underwater volcanoes, some primordial soup burning at several hundred degrees); all elements have to be in the right place at the same time, etc. And be created out of nothing, of course.

While I do agree that it can be helpful for computer vision, computerized "evolution" is just adaptive statistical pattern matching; it's absolutely nothing like real biology. It would be more realistic to just run "cat /dev/random > kernel-gen-xxx.iso" and boot it bare-metal, with no lab environment, no operating system, no programming language, no goal function, no selection / reward process, no debugging, etc.

Even Darwin had his problems with the eye. The reason I believe in God is not necessarily because I want to, but because evolution (not survival-of-the-fittest, but the "mutation creates information" aspect) requires far more faith and far more dogmas, which cannot be questioned for the sake of science. When I was in 8th-grade biology, I took a stone from the schoolyard, put it on the teacher's desk and said "alright, so this is a human if we wait 4 billion years". The teacher ignored me, but never told me I was wrong.

[1] https://link.springer.com/article/10.1007/BF00606433

fschuett commented on Cerebrum: Simulate and infer synaptic connectivity in large-scale brain networks   svbrain.xyz/2024/12/20/ce... · Posted by u/notallm
fschuett · 8 months ago
Simulating a brain would mean that reason, the ability to discern good from bad, is a statistical process. All scientific evidence so far shows that this is not the case: AIs do not "understand" what they're doing; their input data has to be classified first to be usable by the machine. The problem of model collapse shows this especially well: when an AI is trained on the output of another AI, which was trained on the output of yet another AI, it will eventually produce garbage. Why? Because it doesn't "understand" what it's doing, it just matches patterns. The only way to correct it is with hundreds or even thousands of employees who give meaning to the data to guide the model.

Consciousness presumes the ability to make conscious decisions: introspection and, more importantly, free will (otherwise the decision would not be conscious, but robotic regurgitation), to reflect on and judge the "goodness" or "badness" of decisions, i.e. morality. Since it is evident that humans do not always do the logically best thing (look around you at how many people make garbage decisions), a machine can never function like a human can. It can never have opinions (that aren't pre-trained input), as it makes no distinction between good and bad without external input. A machine has no free will, which is a requirement for consciousness. At best, it can be a good facsimile. It can be useful, yes, but it cannot make conscious decisions.

The created cannot be bigger than the creator in terms of informational content; otherwise you'd create a supernatural "ghost" in the machine, and I hope I don't have to explain why I consider creating ghosts unachievable. Even with photo or video AIs, there is no "new" content, just rehashed old content that is a subset of the training data (which is why AI-generated photos often have that "smooth" look to them). The only reason the output of AI has any meaning to us is because we give it meaning, not the computer.

So, before wasting millions of compute hours on this project, I'd first try to hire an indebted millennial who will be glad to finally put his philosophy degree to good use.

fschuett commented on Show HN: Adventures in OCR   blog.medusis.com/38_Adven... · Posted by u/bambax
fschuett · 8 months ago
> After these experiments, it's clear some human review is needed for the text, including spelling fixes and footnote placement.

I just use ChatGPT for spelling fixes (e.g. when rewriting articles). You just have to instruct it NOT to rephrase the article on its own.

fschuett commented on ChatGPT Pro   openai.com/index/introduc... · Posted by u/meetpateltech
ta_1138 · 9 months ago
There are many use cases for which the price can go even higher. Consider recent interactions I had with people who were working at an interview mill: multiple people in a boiler room interviewing for companies all day long, with a computer set up so that our audio was being piped to o1. They had a reasonable prompt to remove many chatbot-isms and make it provide answers that seem person-like: we were 100% interviewing the o1 model. The operator said basically nothing, in both technical and behavioral interviews.

A company making money off of this kind of scheme would be happy to pay $200 a seat for an unlimited license. And I would not be surprised if there were many other very profitable use cases that make $200 per month seem like a bargain.

fschuett · 9 months ago
If any company wants me to be interviewed by an AI representing the client, I'll consider it ethical to let an AI represent me too. Then AIs can interview AIs; maybe that'll get me the job. I'm having strong flashbacks to the movie "Surrogates" for some reason.
