>The spying is revealed in a January intelligence bulletin produced by the Border Patrol and leaked to me.
I'm sure this won't inadvertently flag nerf/band guns, models, tubes/pipes, etc...
Until metal 3D printing becomes common for consumers, this isn't really a big deal. Plastic components have limited lifespan and even questionable safety. It's pretty much always been legal to create your own firearms. Blocking some 3D printers isn't going to stop that. If nothing else, the criminal enterprises will just use out of date software from before the ban and even create their own 3D printers.
3D printing companies need to simply exit the NY market, including the industrial sector. Once you start inspecting businesses, education, and enough individuals, they will cave.
Each one of these actions is a stepping stone the world is taking as a direct consequence of U.S. political negligence. And however difficult it was to render this consequence, it will be tenfold, or hundredfold, as difficult to reverse course.
Since AI has been a thing, I’ve been in a customer facing cloud consulting role - working full time at consulting departments (AWS ProServe) and now a third party company - specializing in app dev.
Before my hands actually write a line of code or infrastructure as code, I’ve already spoken to sales to get a high level idea of what the customer wants, read over the contract (SoW) to see what questions I have, done discovery sessions/requirements analysis, created architecture diagrams, done a design review, created detailed stories/workstreams (epics), thought about all the way things can go wrong etc.
I very much keep my hands on the wheel and treat AI as a junior coder that might not follow my instructions. I can answer any question about architectural decisions, repo structure, what any Lambda does, the naming conventions, etc.
I’ve also intuited “these are the things that I need to think about and test for from my 30 years of professional experience as a developer and 8 years of experience across literally dozens of AWS implementations”.
In the before times, if I were doing this without AI, I would have to have two or three more junior people doing the work just because I couldn't physically do it in 40 hours a week. Even then I would be focused on how it works and looking for corner cases.
I don’t have to think about what I need to test for. I did specifically call out concurrency because there are subtle bugs.
Ironically, what I am working on now had a subtle concurrent locking bug that Codex wrote. I threw the code into ChatGPT thinking mode and it found it immediately and suggested better alternatives. I also have Claude and Codex cross check each other.
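To make the "subtle concurrent locking bug" concrete, here is a minimal sketch (in Python, and not the actual Codex code, which isn't shown in the thread) of the kind of mistake a reviewer can easily miss: the lock covers the read but is released before the write, so the check-then-act sequence is no longer atomic.

```python
import threading

class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def buggy_increment(self):
        # Lock is held only for the read; another thread can interleave
        # between the read and the write below, losing updates.
        with self._lock:
            current = self.value
        self.value = current + 1

    def safe_increment(self):
        # Read-modify-write happens entirely under the lock.
        with self._lock:
            self.value += 1
```

Both versions look "locked" at a glance, which is exactly why this class of bug survives a casual review and needs either careful reasoning or a second model cross-checking.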
Good luck then. The business process flow including edge cases should arguably be top of mind for what to test. Testing shouldn't be an afterthought but rather an integral thought when writing the code that needs to be tested.
"I would have to have two or three more junior people doing the work"
Yeah, and they're the ones thinking about testing the code they write. Architects (and it sounds like you're an architect, not a dev) don't get into that much detail.
> testing
This does not match my experience; I have been working with LLMs since 2023. We presently use the latest models, I assure you. We can definitely afford it.
I am not saying LLMs are worthless, but being able to check their outputs is still necessary at this stage, because, as you said, they are non-deterministic.
We have had multiple customer-impacting events from code that juniors committed without understanding it. Please read my top level comment in this post for context.
I genuinely hope you do not encounter issues due to your confidence in LLMs, but again, my experience does not match yours.
Edit: Would also add that LLMs are not good at determining line numbers in a code file, another flaw that causes a lot of confusion.
They promoted that guy over me because he started closing more stories than me and faster after he started using Copilot. No wonder that team has 40% of its capacity used for rework and tech debt...
The problem is that LLMs mess up things as basic as math and dates, and that's before the context gets too large and they start making other mistakes.
Edit: Also, LLMs over-mock tests, and juniors trust that...
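For anyone who hasn't seen the over-mocking pattern: here is a hypothetical sketch (the function and names are made up for illustration) of a test that mocks the collaborator of the code under test and then only asserts on the mock, so it stays green even though the real logic is wrong.

```python
from unittest.mock import MagicMock

def get_total(db):
    rows = db.fetch_orders()
    # Bug: sums quantity instead of price; the over-mocked test never catches it.
    return sum(r["qty"] for r in rows)

def test_get_total_overmocked():
    db = MagicMock()
    db.fetch_orders.return_value = [{"qty": 2, "price": 10.0}]
    get_total(db)
    # Only verifies that the call happened -- passes despite the wrong total.
    db.fetch_orders.assert_called_once()
```

A junior who trusts the green checkmark ships the bug; the test exercises the mock, not the behavior.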
This is different from what you've done for the past 40 years because you're not testing your code. This would be analogous to you testing someone else's code. The vast majority of people and places have not followed that paradigm until AI showed up.
You run it and check for your desired behavior.
Before, you (or your devs) could write code a couple of different ways and understand it. Now you have to look at code generated by an agent that is not necessarily writing code in the same way as the culture at your company. There might be a thousand different ways a feature gets written. You have to spend more time reviewing and thinking about it, in my opinion.