Readit News
galaxyLogic commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
andsoitis · 4 days ago
There’s a certain amount of independence between municipalities, counties, states, and the federal government.

Except in a minority of cases (e.g. NYC), it is states and the federal government that tax income and capital gains, and they already do not tax citizens on the unrealized value of their home.

So if you're upset about that, you have to take it up in local elections or introduce a measure in your state to prevent municipalities from levying this specific tax.

galaxyLogic · 4 days ago
Yes, the idea of property taxes is NOT to tax the increase in wealth, just the current amount of wealth. And it serves a good purpose in municipalities, which have to make sure infrastructure such as roads, power, and sewage systems gets paid for somehow. More expensive houses typically require more of those.
galaxyLogic commented on If AI replaces workers, should it also pay taxes?   english.elpais.com/techno... · Posted by u/PaulHoule
brandall10 · 4 days ago
Curious, what is your solution to this situation? Imagine all labor has been automated and virtually all facets of life have been commoditized - how does the average person survive in such a society?
galaxyLogic · 4 days ago
I would go further and ask: how does a person who is unable to work survive in our current society? Should we let them die of hunger? Send them to Ecuador? Of course not; only Nazis would propose such a solution.
galaxyLogic commented on Student perceptions of AI coding assistants in learning   arxiv.org/abs/2507.22900... · Posted by u/victorbuilds
gerdesj · 20 days ago
An LLM is a tool, and it's just as "mad" as slide rules, calculators, and PCs (I've seen them all, although slide rules were being phased out in my youth).

Coding via prompt is simply a new form of coding.

Remember that high-level programming languages are "merely" a sop for us humans to avoid low-level languages. The idea is that you will be more productive with, say, Python than you would be with ASM or twiddling electrical switches that correspond to register inputs.
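To make that gap concrete, here is a rough sketch (a made-up micro-example): the same computation in one line of Python, with the assembly-level work it hides summarized in comments.

    # Python: sum of the squares of 1..10, in one line.
    total = sum(n * n for n in range(1, 11))
    print(total)  # 385

    # The ASM version would hand-manage a loop counter in a register,
    # multiply, accumulate, compare, branch, and make a syscall to
    # print - dozens of instructions for the same result.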

A purist might note that using Python is not sufficiently close to the bare metal to be really productive.

My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies - that will involve problem definition and formulation, and then an iterative effort to solve the problem. It will obviously involve how to spot and deal with hallucinations. They'll need to start discovering model quality for differing tasks, and all sorts of things that would have looked like sci-fi to me 10 years ago.

I think we are at, for LLMs, the "calculator on a digital wrist watch" stage that we had in the mid '80s, before the really decent scientific calculators rocked up. Those calculators are largely still what you get nowadays too, and I suspect that LLMs will settle into a similar role.

They will be great tools when used appropriately, but they will not run the world - or if they do, not for very long. Bye!

galaxyLogic · 20 days ago
But we as humans still have a need to understand the outputs of AI. We can't delegate this understanding task to AI, because then we wouldn't understand the AI, and thus we could not CONTROL what the AI is doing or optimize its behavior so that it maximizes our benefit.

Therefore, I still see a need for high-level and even higher-level languages, but ones which are easy for humans to understand. AI can help, of course, but the challenge is how we can unambiguously communicate with machines and express our ideas concisely and understandably, both for us and for the machines.
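One existing mechanism in that spirit is Python's doctest: an executable example that reads naturally to a human and is mechanically checked by the machine. A minimal sketch (hypothetical function):

    def median(values):
        """Return the median of a non-empty list of numbers.

        >>> median([3, 1, 2])
        2
        >>> median([4, 1, 2, 3])
        2.5
        """
        s = sorted(values)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # the machine verifies the human-readable examples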

galaxyLogic commented on Vibe coding: What is it good for? Absolutely nothing   theregister.com/2025/11/2... · Posted by u/galaxyLogic
davydm · 23 days ago
I get your point about mentors. I came up through having to figure stuff out myself a lot via Stack Overflow and friends, where the biggest problem for me is usually how to ask the right question (e.g. with Elasticsearch, having to find and understand "index" vs "store" - once I have those two terms, searching is a lot easier, and without them, it's a bit of a crapshoot). Mentors help here because they had to travel that road too and can probably translate from my description to the correct terms.

And I really wish I could trust an LLM for that, or, indeed, any task. But I generally find answers fall into one of these useless buckets:

1. Rewording the question as an answer (so common, so useless).

2. Trivial solutions that are correct - meaning one or two lines that are valid, but that I could have easily written myself quicker than getting an agent involved, and without the other drawbacks on this list.

3. Wildly incorrect "solutions". I'm talking about code that doesn't even build, because the LLM can't take proper direction on which version of the library to refer to, so it keeps giving results based off old information that is no longer relevant. Try resolving a webpack 5 issue - you'll get a lot of webpack 4 answers and none of them will work, even if you specify webpack 5.

4. The absolute worst: subtly incorrect solutions that seem correct and are confidently presented as correct. This has been my experience with basically every "oh wow, look what the LLM can do" demo. I'm that annoying person who finds the bug mid-demo.

The problems are:

1. A person inexperienced in the domain will flounder for ages, trying out crap that doesn't work and understanding nothing of it.

2. A person experienced in the domain will spend a reasonable amount of time correcting the LLM - and personally, I'd much rather write my own code via TDD-driven emergent design (a sketch below): I'll understand it, and it will be proven to work when it's done.
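A tiny sketch of that test-first loop (a hypothetical two-file example, assuming pytest):

    # test_slug.py - written first; it pins down the behavior I want.
    from slug import slugify

    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    # slug.py - the simplest code that makes the test pass; the design
    # emerges as more tests are added.
    def slugify(text):
        return "-".join(text.lower().split())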

I see that proponents of the tech often gloss over this and don't realise that they're actually spending more time overall, especially when having to polish out all the bugs or maintain the system.

Use whatever you want, but I've got zero confidence in the models, and I prefer to write code instead of gambling. But to each their own.

galaxyLogic · 22 days ago
The way I see AI coding agents at the moment: they are interns. You wouldn't give an intern responsibility for the whole project. You need an experienced developer who COULD do the job with some help from interns, but now the AI can be the intern.

There's an old saying: "Fire is a good servant but a bad master." I think the same applies to AI. In vibe-coding, the AI is too much the master.

galaxyLogic commented on Migrating the main Zig repository from GitHub to Codeberg   ziglang.org/news/migratin... · Posted by u/todsacerdoti
galaxyLogic · 23 days ago
What's the main difference between the different repos? I would think whoever keeps the repo free of malicious code is the best. A big player like GH should have an advantage there. And not just intentional malicious code uploads - vulnerable code should also be detected and reported to the submitters.
galaxyLogic commented on Google Antigravity   antigravity.google/... · Posted by u/Fysi
galaxyLogic · a month ago
If I use this, does it mean Google has access to all my code, and that it may pop up as "AI generated" in someone else's code?
galaxyLogic commented on Spec-Driven Development: The Waterfall Strikes Back   marmelab.com/blog/2025/11... · Posted by u/vinhnx
thomascountz · a month ago
I just tried an experiment using Spec-Kit from GitHub to build a CLI tool. Perhaps the scope of the tool doesn't lend itself to Spec-Driven Development, but I found the many, many hours of tweaking, asking, correcting, analyzing, adapting, refining, reshaping, etc. before getting to see any code challenging. As would be the case with Waterfall today, the lack of iterative end-to-end feedback is foreign and frustrating to me.

After Claude finally produced a significant amount of code, and after realizing it hadn't built the right thing, I was back at the drawing board to find out what language in the spec had led it astray. Never mind digging through the code at this point; it would be just as good to start again as to try to onboard myself to the thousands of lines of code it had built... and I suppose the point is to ignore the code as "implementation detail" anyway.

Just to be clear: I love writing code with an LLM, be it for brainstorming, research, or implementation. I often write - and have it output - small markdown notes and plans for it to ground itself. I think I just found this experience with SDD quite heavy-handed and the workflow unwieldy.

galaxyLogic · a month ago
I think the challenge is how to create a small but evolvable spec.

What LLMs bring to the picture is that the "spec" is high-level coding. In normal coding you start by writing small functions, then verify that they work. Similarly, LLMs should perhaps be given small specs to start with, with more functions/features then added to the spec incrementally. Would that work?
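For instance (a made-up spec for an imaginary word-count tool), the increments might look like:

    spec v1: wc-lite takes a file path and prints the file's word count.
    spec v2: v1, plus a --lines flag that prints the line count instead.
    spec v3: v2, plus support for multiple paths, one output line per file.

Each version would be generated and verified end-to-end before the next line is added to the spec.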

galaxyLogic commented on LLMs are steroids for your Dunning-Kruger   bytesauna.com/post/dunnin... · Posted by u/gridentio
galaxyLogic · a month ago
> LLMs should not be seen as knowledge engines but as confidence engines.

The thing I like best about LLMs is when I ask a question about some technical problem and it tells me that it is a KNOWN problem. That gives me confidence that I don't need to spend time looking for a solution where there is no good solution - just go around it somehow. It lets me know I'm not the only person with this problem. And that way it gives me confidence that I'm not stupid; the problem is a real problem.

As an example, I was working with WebStorm and tried to find a way to make the Threads tab the default tab shown when the debugger opens. The AI told me there is no way it knows of. Good, problem solved - solved by finding out there is no solution.

u/galaxyLogic

Karma: 5111 · Cake day: May 7, 2013