Readit News
cttet commented on Typed languages are better suited for vibecoding   solmaz.io/typed-languages... · Posted by u/hosolmaz
NischalM · a month ago
I have found this to be true as well. I exclusively use Python and R at work, and when I tried CC several times for small side projects, it always seemed to have problems and ended up in a loop trying to fix its own errors. CC seems much better at vibe coding with TypeScript: I went from no knowledge of Node.js development to deploying a reasonable web app on Vercel in a few days. Asking CC to run tsc after changes helps it fix any errors, because the type system gives much faster feedback than Python. Granted, this was only a personal side project and may not hold for much larger production systems, but I was pleasantly surprised by how easy it was in TypeScript compared to Python.
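The feedback loop described above can be sketched minimally (the file contents and names here are hypothetical, not from the project in the comment): `tsc --noEmit` rejects a type error before anything runs, whereas the equivalent Python mistake only surfaces at runtime.

```typescript
// user.ts — a hypothetical module; `tsc --noEmit` type-checks it without executing it.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name} (#${user.id})`;
}

// A call like greet({ id: "1", name: "Ada" }) would be flagged by tsc at
// compile time; the untyped-Python equivalent only fails when executed.
console.log(greet({ id: 1, name: "Ada" }));
```

An agent that runs `tsc --noEmit` after each edit gets this error report in seconds, which is the fast feedback the comment attributes to the type system.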
cttet · a month ago
It may be a Claude-specific thing. I tried asking Claude to do various machine learning tasks, like implementing gradient boosting, without specifying the language, thinking it would use Python since it is the most common option and has utilities like NumPy to make things much easier. But Claude mostly chose JavaScript and somehow managed to do it in JS.
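For context, gradient boosting really does not need NumPy-style utilities. Here is my own minimal sketch (not Claude's output) in TypeScript: squared-error loss with depth-1 decision stumps, each stump fit to the current residuals (the negative gradient of the squared loss).

```typescript
// One weak learner: a depth-1 stump predicting a constant on each side of a threshold.
type Stump = { threshold: number; left: number; right: number };

// Fit the stump minimizing squared error against the residuals.
function fitStump(x: number[], residual: number[]): Stump {
  const mean = (a: number[]) => (a.length ? a.reduce((s, v) => s + v, 0) / a.length : 0);
  let best: Stump = { threshold: x[0], left: 0, right: 0 };
  let bestErr = Infinity;
  for (const t of x) {
    const left: number[] = [], right: number[] = [];
    x.forEach((xi, i) => (xi <= t ? left : right).push(residual[i]));
    const lm = mean(left), rm = mean(right);
    const err = x.reduce((s, xi, i) => s + (residual[i] - (xi <= t ? lm : rm)) ** 2, 0);
    if (err < bestErr) { bestErr = err; best = { threshold: t, left: lm, right: rm }; }
  }
  return best;
}

// Boost: start from the mean, then repeatedly fit a stump to the residuals.
function gradientBoost(x: number[], y: number[], rounds = 50, lr = 0.5): (xi: number) => number {
  const base = mean0(y);
  const stumps: Stump[] = [];
  const predict = (xi: number) =>
    stumps.reduce((p, s) => p + lr * (xi <= s.threshold ? s.left : s.right), base);
  for (let r = 0; r < rounds; r++) {
    const residual = y.map((yi, i) => yi - predict(x[i])); // negative gradient of squared loss
    stumps.push(fitStump(x, residual));
  }
  return predict;
}

function mean0(a: number[]): number {
  return a.reduce((s, v) => s + v, 0) / a.length;
}
```

On a toy dataset like `x = [1, 2, 3, 4]`, `y = [1, 1, 3, 3]`, the ensemble converges to the two group means after a few dozen rounds — no numerical library required, which may be why the language choice mattered less than expected.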
cttet commented on François Chollet: The Arc Prize and How We Get to AGI [video]   youtube.com/watch?v=5QcCe... · Posted by u/sandslash
saberience · 2 months ago
The Arc prize/benchmark is a terrible judge of whether we got to AGI.

If we assume that humans have "general intelligence", we would expect all humans to ace Arc... but they can't. Try asking your average person, i.e. supermarket workers, gas station attendants, etc., to do the Arc puzzles: they will do poorly, especially on the newer ones. But AI has to do perfectly to prove it has general intelligence? (Not trying to throw shade here, but the reality is this test is more like an IQ test than an AGI test.)

Arc is a great example of AI researchers moving the goal posts for what we consider intelligent.

Let's get real, Claude Opus is smarter than 99% of people right now, and I would trust its decision making over 99% of people I know in most situations, except perhaps emotion driven ones.

The Arc AGI benchmark is just a gimmick. Also, since it's a visual test and the current models were trained on almost entirely text-based datasets, it's actually rigged against the AI models anyway.

Basically, it's a test of some kind, but it doesn't mean quite as much as Chollet thinks it means.

cttet · 2 months ago
Maybe it is a cultural difference, but I feel that the "supermarket workers, gas station attendants" (in an Asian country) that I know of would be quite capable of most ARC tasks.
cttet commented on François Chollet: The Arc Prize and How We Get to AGI [video]   youtube.com/watch?v=5QcCe... · Posted by u/sandslash
qoez · 2 months ago
I feel like I'm the only one who isn't convinced that getting a high score on the ARC eval test means we have AGI. It's mostly about pattern matching (and for some of it, it's ambiguous even for humans what the actual true response ought to be). It's like how in humans there are lots of different 'types' of intelligence, and just overfitting on IQ tests doesn't convince me a person is actually that smart.
cttet · 2 months ago
The point is not that a high score -> AGI; their idea is more that a low score -> we don't have AGI yet.
cttet commented on NoProp: Training neural networks without back-propagation or forward-propagation   arxiv.org/abs/2503.24322... · Posted by u/belleville
gwern · 5 months ago
https://www.reddit.com/r/MachineLearning/comments/1jsft3c/r_...

I'm still not quite sure how to think of this. Maybe as being like unrolling a diffusion model, the equivalent of BPTT for RNNs?

cttet · 5 months ago
In all their experiments, backprop is still used for most of their parameters, though...
cttet commented on Stop using the elbow criterion for k-means   arxiv.org/abs/2212.12189... · Posted by u/Anon84
joshdavham · 5 months ago
Since when did researchers decide to start titling their papers like clickbait YouTube videos?

The “Stop doing [conventional thing]!” title formula is, for whatever reason, the title that I always find the most annoying.

cttet · 5 months ago
Since a long time ago. And I've noticed a trend: the more theoretical the field, the more 'clickbaity' the titles, e.g. TCS, machine learning theory, etc.
cttet commented on Ask HN: Is Clean Code a waste of time?    · Posted by u/01-_-
cttet · 6 months ago
Never read his books. But I have found that good architecture is very helpful for LLM-assisted coding, if I keep things nicely named and decoupled.
cttet commented on Fire-Flyer File System (3FS)   github.com/deepseek-ai/3F... · Posted by u/wenyuanyu
do_not_redeem · 6 months ago
Can someone convince me this isn't NIH syndrome? Why would you use this instead of SeaweedFS, Ceph, or MinIO?
cttet · 6 months ago
If NIH syndrome boosts the morale of the team, though, it should help overall team progress.
cttet commented on Show HN: Marmite – Zero-config static site generator   github.com/rochacbruno/ma... · Posted by u/rochacbruno
apitman · 10 months ago
The fact people still add "written in Rust" to HN submissions is almost as funny as how effective it is, which itself is almost as funny as the fact that people like me can't stop commenting on it.
cttet · 10 months ago
> people like me can't stop commenting on it

Yeah this loop will go on with this comment, it seems.

cttet commented on Google Scholar PDF Reader   scholar.googleblog.com/20... · Posted by u/gerroo
ildon · a year ago
I suggest you try new devices for reading papers. Often the perception that paper is a better medium comes from a lack of more convenient devices. Paper is better than a 15'' screen for sure, for many reasons including size and posture while reading. But have you tried larger screens (> 27''), large tablets (>= A4), or as-large-as-possible E-Ink readers? Depending on your preferences, you might find that some of these actually work better than paper for you too :-)
cttet · a year ago
Paper enables random access to content that doesn't rely on vision (scroll bars); when you keep going back and forth between two pages, it is very annoying on all current devices except paper. Vision Pro/VR/AR or a particular multi-screen setup could achieve this, but so far no alternative is as good.
cttet commented on Show HN: Personal Knowledge Base Visualization   github.com/raphaelsty/kno... · Posted by u/raphaelty
Nuzzerino · a year ago
Just my two cents: If you're saving enough documents to where you need something like this, you're spending too much time bookmarking and not enough time actually making use of the knowledge contained in there. I'm sure it's an improvement for those people though.

Would be even better if AI systems were integrated with hypergraphs of the sort, which was an approach some AGI projects were taking 1-2 decades ago.

cttet · a year ago
For me, bookmarking does not mean that I want to read the article; it usually means that I think it may be useful for some task in the future and I want it to show up in my search results. Sometimes I want to read in depth about something I read briefly long ago, and it is really hard to find it again.
