Coding via prompt is simply a new form of coding.
Remember that high-level programming languages are "merely" a sop for us humans to avoid low-level languages. The idea is that you will be more productive with, say, Python than you would with ASM or twiddling electrical switches that correspond to register inputs.
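To make that gap concrete, here's a rough illustration - the high-level line on top, and below it a Python sketch of the bookkeeping it hides (imagine each line as a load/add/compare/jump done by hand in ASM):

    # One high-level line...
    total = sum(range(1, 101))
    print(total)  # 5050

    # ...versus, roughly, the bookkeeping it abstracts away (sketched in Python
    # here, but think of each step as hand-written loads, adds, and jumps):
    total = 0
    i = 1
    while i <= 100:
        total = total + i
        i = i + 1
    print(total)  # 5050 again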
A purist might note that using Python is not sufficiently close to the bare metal to be really productive.
My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies - that will involve problem definition and formulation, and then an iterative effort to solve the problem. It will obviously involve how to spot and deal with hallucinations. They'll need to start discovering which models suit which tasks, and all sorts of things that would have looked like sci-fi to me 10 years ago.
I think we are, with LLMs, at the "calculator on a digital wristwatch" stage that we had in the mid '80s, before the really decent scientific calculators rocked up. Those calculators are largely still what you get nowadays, too, and I suspect LLMs will settle into a similar role.
They will be great tools when used appropriately, but they will not run the world - or if they do, not for very long. Bye!
Therefore, I still see a need for high-level and even higher-level languages, but ones which are easy for humans to understand. AI can help, of course, but the challenge is how we can unambiguously communicate with machines and express our ideas concisely and understandably, both for us and for the machines.
And I really wish I could trust an LLM for that, or, indeed, any task. But I generally find the answers fall into one of these useless buckets:

1. Rewording the question as an answer (so common, so useless).

2. Trivial solutions that are correct - meaning one or two lines that are valid, but that I could have written myself faster than getting an agent involved, and without the other drawbacks on this list.

3. Wildly incorrect "solutions". I'm talking about code that doesn't even build, because the LLM can't take proper direction on which version of the library to refer to, so it keeps giving results based on old information that is no longer relevant. Try resolving a webpack 5 issue - you'll get a lot of webpack 4 answers and none of them will work, even if you specify webpack 5.

4. The absolute worst: subtly incorrect solutions that seem correct and are confidently presented as correct. This has been my experience with basically every "oh wow, look what the LLM can do" demo. I'm that annoying person who finds the bug mid-demo.
The problems are:

1. A person inexperienced in the domain will flounder for ages, trying out crap that doesn't work and understanding none of it.

2. A person experienced in the domain will spend a fair amount of time correcting the LLM - and personally, I'd much rather write my own code via TDD-driven emergent design (see the sketch below): I'll understand it, and it will be proven to work when it's done.
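For what it's worth, a minimal sketch of what I mean by test-first - the test pins the behaviour down before any implementation exists (file and function names are just illustrative):

    # test_slugify.py - written first; it fails until slugify() exists and behaves.
    from slugify_example import slugify

    def test_spaces_become_hyphens():
        assert slugify("Hello World") == "hello-world"

    def test_punctuation_is_dropped():
        assert slugify("Hello, World!") == "hello-world"


    # slugify_example.py - the simplest code that makes those tests pass.
    import re

    def slugify(text: str) -> str:
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)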
I see that proponents of the tech often gloss over this and don't realise that they're actually spending more time overall, especially when having to polish out all the bugs. Or maintain the system.
Use whatever you want, but I've got zero confidence in the models, and I prefer to write code instead of gambling. But to each their own.
There's an old saying: "Fire is a good servant but a bad master." I think the same applies to AI. In "vibe-coding", the AI is too much the master.
After Claude finally produced a significant amount of code, and after realizing it hadn't built the right thing, I was back to the drawing board to find out what language in the spec had led it astray. Never mind digging through the code at this point; it would be just as easy to start again as to try to onboard myself onto the thousands of lines of code it had built... and I suppose the point is to ignore the code as an "implementation detail" anyway.
Just to be clear: I love writing code with an LLM, be it for brainstorming, research, or implementation. I often write - and have it output - small markdown notes and plans for it to ground itself. I think I just found this experience with SDD quite heavy-handed and the workflow unwieldy.
What LLMs bring to the picture is that the "spec" is high-level coding. In normal coding you start by writing small functions, then verify that they work. Similarly, perhaps LLMs should be given small specs to start with, and then more functions/features added to the spec incrementally. Would that work?
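A rough sketch of that incremental loop - generate_code() and verify() below are just stand-ins for whatever model call and test run you'd actually use, not a real API:

    # Grow the spec the way you grow code: start small, verify, then add the
    # next requirement. Both helpers here are placeholders, not a real API.
    spec_increments = [
        "slugify(text) lowercases words and joins them with hyphens",
        "slugify(text) drops punctuation",
        "slugify(text) collapses repeated whitespace",
    ]

    def generate_code(spec_so_far):
        """Stand-in for whatever model/agent call you actually use."""
        print("spec so far:")
        for line in spec_so_far:
            print("  -", line)

    def verify():
        """Stand-in for running the tests accumulated so far (e.g. pytest)."""
        return True

    accepted = []
    for increment in spec_increments:
        candidate = accepted + [increment]
        generate_code(candidate)
        if not verify():      # if the new increment breaks earlier behaviour...
            break             # ...stop and fix before the spec grows any bigger
        accepted.append(increment)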
The thing I like best about LLMs is when I ask a question about some technical problem and it tells me that it is a KNOWN problem. That gives me confidence that I don't need to spend time looking for a solution where there is no good solution - just work around it somehow. It lets me know I'm not the only person with this problem, and in that way it gives me confidence that I'm not stupid; the problem is a real problem.
As an example, I was working with WebStorm and tried to find a way to make the Threads tab the default tab shown when the debugger opens. The AI told me there is no way it knows of. Good, problem solved - solved by finding out that there is no solution.