Is this AI?
How do you know the generated outputs are correct? Especially for unusual circumstances?
Say the scenario is a patch of road densely covered with 5 mm ball bearings. I'm sure the model will happily spit out numbers, but are they reasonable? How do we know they're reasonable? And even if that prediction is OK, how do we fundamentally know that the prediction for 4 mm ball bearings won't be completely wrong?
There seems to be a lot of critical information missing.
Think of it more like unit tests: "In this synthetic scenario, does the car stop as expected? Does it continue as expected?" You might hit some false negatives, but there isn't a downside to that.
If it turns out your model has a blind spot for albino cows in a snow storm eating marshmallows, you might be able to catch that synthetically and spend some extra effort to prevent it.
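In spirit, something like this minimal sketch. To be clear, the names (Scenario, run_scenario) and the trivial stopping logic are invented stand-ins for a real simulator and planner, not any actual API; the point is only the shape of the test:

```python
import unittest
from dataclasses import dataclass


# Hypothetical stand-ins for a real driving stack -- invented for
# illustration, not taken from any real simulator framework.
@dataclass
class Scenario:
    road_surface: str
    speed_kph: float


@dataclass
class Result:
    stopped_before_obstacle: bool


def run_scenario(scenario: Scenario) -> Result:
    # Stand-in "planner": stop whenever the surface is flagged as debris.
    return Result(stopped_before_obstacle=scenario.road_surface != "clear")


class SyntheticScenarioTests(unittest.TestCase):
    def test_stops_on_ball_bearings(self):
        # The 5 mm ball-bearing patch from the comment upthread.
        result = run_scenario(Scenario("ball_bearings_5mm", speed_kph=60))
        self.assertTrue(result.stopped_before_obstacle)

    def test_continues_on_clear_road(self):
        # A false negative here just means an overly cautious car,
        # which is the cheap kind of failure.
        result = run_scenario(Scenario("clear", speed_kph=60))
        self.assertFalse(result.stopped_before_obstacle)


if __name__ == "__main__":
    unittest.main()
```

You assert on the behavior (did it stop?), not on the model's internal numbers, so the test stays meaningful even when the model changes.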
We have seen at least three of these projects: the JustHTML one, FastRender, and this one. All started from beefy test suites and specs, and they show that reimplementation without manual intervention kind of works.
Never was.
Not if you hire reasonably competent people. These days, for the vast majority of FOSS services, all you need is the ability to spin up a VPS and run a few simple Docker/Podman Compose commands; it can't be that hard.
I agree, though there are many non-critical libraries that could be replaced with helper methods. It also coincides with growing awareness of supply-chain risks.
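For instance, a tiny utility dependency can often be a few lines of your own code instead. A sketch, where slugify stands in for any small convenience library you might otherwise pull in:

```python
import re
import unicodedata


def slugify(text: str) -> str:
    """Minimal replacement for a small third-party slug library:
    strip accents, collapse non-alphanumerics to hyphens, lowercase."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    text = re.sub(r"[^a-zA-Z0-9]+", "-", text).strip("-")
    return text.lower()


print(slugify("Héllo, Wörld!"))  # -> "hello-world"
```

Ten lines you fully understand and audit yourself, versus one more entry in the dependency tree.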
If you use a well regarded library, you can trust that most things in it were done with intention. If an expectation is violated, that's a learning opportunity.
With the AI firehose, you can't really treat it the same way. Bad patterns don't exactly stand out.
Maybe it'll be fine, but I still expect to see a lot of codebases saddled with garbage for years to come.