What do you think of the recent MIT news that 95% of gen AI projects don't deliver anything valuable at all?
- I'm not a researcher and not fine tuning or deploying models on GPUs
- I have a math/traditional ML background, but my explanation of how transformers, tokenizers, etc work would be hand-wavy at best.
- I'm a "regular engineer" in the sense I'm following many of the standard SWE/SDLC practices in my org.
- I'm exclusively focused on building AI features for our product, I wear a PM hat too.
- I'm pretty tuned in to the latest model releases and capabilities of frontier models, and consider being able to articulate that information part of my job.
- I also use AI heavily to produce code, which is helpfully a pretty good way to get a sense for model capabilities.
Do I deserve a special job title...maybe? I think there's definitely an argument that "AI Engineering" really isn't a special thing, and considering how much of my day to day is pure integration work with the actual product, I can see that. OTOH, part of my job and my value at work is very product based. I pay a lot of attention to what other people in the industry are doing, new model releases, and how others are building things, since it's such a new area and there's no "standard playbook" yet for many things.
I actually quite enjoy it since there's a ton of opportunity to be creative. When AI first started becoming big I thought about going the other direction - leveraging my math/ML background to get deeper into GPUs and MLOps/research-lite kind of work. Instead I went in a more producty direction, which I don't regret yet.
Development with CC isn't development in general - not all types of development. It works for frontend JS-based development and for backend development. Only those scenarios work.
Try creating a native desktop or mobile app and it's a swamp of trial and error.
You have to learn by trial and error which documentation sets and instructions you have to provide at which moment and in which context, and balance that against token cost. It's a multi-dimensional problem for which there are no recipes that work.
On top of that, your direct instructions not to use particular patterns or approaches get forgotten and ignored by CC, with a later "you're right, I should have...". I am starting to think it's not solvable by the user providing docs, examples, and instructions - that Claude must have native development baked in to the same level as they baked in frontend and backend.
What I am getting at is: make a tool to manage those doc sets, contexts, and instructions, and allow users to share those sets globally as recipes.
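For what it's worth, a shareable "recipe" along these lines could be as simple as a named bundle of doc paths, standing instructions, and a token budget. Here's a minimal sketch in Python - the `Recipe` class and every field name are hypothetical, not any existing tool's API:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Recipe:
    """A shareable bundle of docs + instructions for one dev scenario."""
    name: str                                         # e.g. "native-desktop"
    doc_paths: list = field(default_factory=list)     # docs to inject into context
    instructions: list = field(default_factory=list)  # standing rules for the agent
    max_context_tokens: int = 8000                    # cap to balance against token cost

    def to_json(self) -> str:
        # Serialize for sharing between users
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def from_json(cls, raw: str) -> "Recipe":
        # Load a recipe someone else published
        return cls(**json.loads(raw))

# Round-trip: serialize a recipe and load it back
r = Recipe(
    name="native-desktop",
    doc_paths=["docs/appkit-notes.md"],
    instructions=["Never use pattern X; prefer approach Y."],
)
restored = Recipe.from_json(r.to_json())
print(restored.name)  # -> native-desktop
```

The point being that a recipe registry would mostly be plumbing around a format this small; the hard part is curating which docs/instructions actually work for each scenario.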
I would _never_ give an LLM access to any disk I own or control if it had anything more than read permissions
Agreed. The web will be better off for everyone if these sites die out. Google is what brought these into existence in the first place, so I find it funny Google is now going to be one of the ones helping to kill them. Almost like they accidentally realized SEO got out of control so they have to fix their mistake.
> "what is the type of wrench called for getting up into tight spaces"
> AI search gives me an overview of wrench types (I was looking for "basin wrench")
> new search "basin wrench amazon"
> new search "basin wrench lowes"
> maps.google.com "lowes"
Notably, the information I was looking for was general knowledge. The only people "losing out" here are people running SEO-spammish websites that themselves (at this point) are basically hosting LLM-generated answers for me to find. These websites don't really need to exist now. I'm happy to funnel 100% of my traffic to websites that are representing real companies offering real services/info (ship me a wrench, sell me a wrench, show me a video on how to use the wrench, etc).
Woah hang on, I think this betrays a severe misunderstanding of what engineers do.
FWIW I was trained as a classical engineer (mechanical), but pretty much just write code these days. But I did have a past life as a not-SWE.
Most classical engineering fields deal with probabilistic system components all of the time. In fact I'd go as far as to say that inability to deal with probabilistic components is disqualifying from many engineering endeavors.
Process engineers for example have to account for human error rates. On a given production line with humans in a loop, the operators will sometimes screw up. Designing systems to detect these errors (which are highly probabilistic!), mitigate them, and reduce the occurrence rates of such errors is a huge part of the job.
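Detecting when a human error rate has drifted out of its normal band is classic statistical process control; a p-chart (an attribute control chart) is the standard tool. A minimal sketch, with the 2% error rate and sample size purely illustrative:

```python
import math

def p_chart_limits(p_bar: float, n: int) -> tuple:
    """3-sigma control limits for an attribute (error-rate) p-chart.

    p_bar: historical average error proportion
    n:     number of units inspected per sample (e.g. per shift)
    """
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)  # lower limit floors at 0
    ucl = min(1.0, p_bar + 3 * sigma)  # upper limit caps at 1
    return lcl, ucl

# Historical operator error rate of 2%, sampling 200 units per shift:
lcl, ucl = p_chart_limits(0.02, 200)
# A shift whose observed error rate falls outside [lcl, ucl] signals a
# "special cause" worth investigating, rather than ordinary random variation.
```

The interesting engineering is exactly what the parent describes: deciding what to do when a sample trips the limit (stop the line? retrain? redesign the station?), not the arithmetic itself.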
Likewise, even for regular mechanical engineers, there are probabilistic variances in manufacturing tolerances. Your specs always come with tolerances (this metal sheet is 1 mm thick ±0.05 mm) because of this. All of the designs you work on specifically account for this (hence safety margins!). How these probabilities combine and interact is a serious field of study.
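To make the sheet-metal example concrete: one standard way tolerances combine is root-sum-square (RSS) stack-up, which treats independent tolerances statistically rather than assuming every part sits at its worst-case limit at once. A quick sketch:

```python
import math

def worst_case_stack(tolerances):
    """Worst-case stack-up: assume every part is at its tolerance limit."""
    return sum(tolerances)

def rss_stack(tolerances):
    """Root-sum-square stack-up: combine independent tolerances statistically."""
    return math.sqrt(sum(t * t for t in tolerances))

# Three sheets, each 1 mm +/- 0.05 mm, stacked together:
tols = [0.05, 0.05, 0.05]
print(f"worst case: +/-{worst_case_stack(tols):.3f} mm")  # +/-0.150 mm
print(f"RSS:        +/-{rss_stack(tols):.3f} mm")         # +/-0.087 mm
```

The RSS figure is smaller because three parts all sitting at their extreme simultaneously is unlikely; which stack-up method you're allowed to use is itself a design decision with risk trade-offs.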
Software engineering is unlike traditional engineering disciplines in that, for most of its lifetime, it has had the luxury of purely deterministic expectations. Nearly every other type of engineering has no such luxury.
If anything, the advent of ML has introduced this element to software, and the ability to actually work with probabilistic outcomes is what separates people who are serious about this stuff from demoware hot air blowers.