It's just disrespectful. Why would anyone want to review the output of an LLM without any more context? If you really want to help, submit the prompt and the LLM's thinking tokens along with the final code. There are only nefarious reasons not to.
I filled out the PDF using Firefox's PDF editor, at which point it occurred to me that this is not so different from using an application that has a form for me to enter data into.
Maybe in a few years the government will have a portal where I can submit any of its forms as PDF documents, and it would probably use AI to store the contents of the form in a database.
A PDF form is kind of a universal API, especially when AI can extract and validate the data from it. Of all the API formats I've seen, I think PDF forms are the most human-friendly. Each "API" is defined by the form identifier in the PDF form. It is easy for humans to use, and pretty easy for office clerks to create such forms, especially with the help of AI. I wonder whether this, or something similar, will catch on?
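To make the "PDF form as API" idea concrete, here's a minimal sketch of the back half: once the field values have been extracted (by AI or otherwise), the form identifier becomes the routing key and the fields become the request body. The form ID, field names, and `form_to_request` helper are all hypothetical, just for illustration.

```python
# Hypothetical sketch: packaging extracted PDF-form fields as an "API call".
# The form identifier routes the request; the fields are the payload.
import json

def form_to_request(form_id: str, fields: dict) -> str:
    """Serialize a form submission as a JSON request body."""
    return json.dumps({"form": form_id, "data": fields}, sort_keys=True)

# Example: a filled-in (made-up) tax form becomes one self-describing request.
req = form_to_request("TAX-2024-A", {"name": "Jane Doe", "income": "42000"})
print(req)
```

The point is that the schema lives in the form itself, so the "endpoint" is just wherever the document gets dropped off.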
If we're at the point where they use AI to make form PDFs, we might as well cut out the middleman and ask the AI to generate a form on a website.
It's the same in the US. The ISP fiber network falls inside their security boundary in my experience - you can't BYOD. They install a modem (these days often including an integrated router, switch, and AP) and you receive either ethernet or wifi from them.
I think the only major change in that regard has been that coaxial cable providers here will often let you bring your own DOCSIS modem these days.
I never found any of this concerning until quite recently. With the advent of ISPs providing public wifi service out of consumer endpoints, as well as wifi-based radar, I'm no longer comfortable having vendor-controlled wireless equipment in my home.
I use AI; what I'm tired of is shills and post-apocalyptic prophets.
This is in tech now, we're the first adopters, but soon it will come to other fields.
To your broader question:
> Something that I think many students, indeed many people, struggle with is the question "why should I know anything?"
You should know things because these AIs are wrong all the time, and because if you want any control in your life you need to be able to make an educated guess at what is true and what isn't.
As to how to teach students: I think we're in an age of experimentation here. I like the idea of letting students use all tools available for the job. But I also agree that if you do give exams and homework, you'd better make them handwritten or oral only.
Overall, I think education needs to focus more on building portfolios for students, and less on giving them grades.
Gosh, that sounds horrifying. I am not an expert on that piece of the system, and no, I do not want to take responsibility for whatever the LLMs have produced for it, because I cannot verify it.
- Language
- Total LOC
- Subject matter expertise required
- Total dependency chain
- Subjective score (audited randomly)
And we can start doing some analysis. Otherwise we're pissing into ten kinds of winds.
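A minimal sketch of what such a per-task record might look like, so reports could actually be compared. The field names and value conventions here are purely illustrative assumptions, not any existing standard:

```python
# Sketch of a structured report for an LLM coding task, covering the
# dimensions listed above. All field names/semantics are made up.
from dataclasses import dataclass

@dataclass
class TaskReport:
    language: str             # e.g. "rust", "html/css"
    total_loc: int            # total lines of code touched
    expertise_required: str   # e.g. "low", "domain expert"
    dependency_chain: int     # size of the transitive dependency chain
    subjective_score: float   # 0-10, audited randomly

# Example report for a hypothetical task:
r = TaskReport("rust", 1200, "domain expert", 35, 6.5)
print(r)
```

With enough of these collected, "it's great at X and useless at Y" claims could be checked against the distribution instead of traded as anecdotes.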
My own subjective experience: it's earth-shattering at web apps in HTML and CSS (because I'm terrible and slow at those), annoyingly good but usually a bit wrong at planning and optimization in Rust, and horribly lost at systems design or debugging a reasonably large Rust system.
Besides one point: junior developers can learn from their egregious mistakes; LLMs can't, no matter how strongly worded you are in their system prompt.
In a functional work environment, you build trust with your coworkers little by little. The pale equivalent with LLMs is improving system prompts and writing more and more AI directives that might or might not be followed.
"it is done because it's always done so"