More perf means more attempts in parallel with some sort of arbiter model deciding what to pick. This can happen at the token, prompt, or agent level or all of them.
Here I am on my free-as-in-freedom operating system, making commits with my free DVCS tool in my free programmable text editor, building it with my free language toolchain, using my free terminal emulator/multiplexer with my free UNIX shell. VC backed tools like Warp and Zed that seek to innovate in this space are of zero interest to me as a developer.
Please, please, please: get paid, rather than holding on so tightly to making things free that you force future enshittification.
Kotlin: the constructor is either part of the class header or declared with the `constructor` keyword.
Ruby: initialize
JS: constructor
Python: __new__, __init__
Literally this meme: https://knowyourmeme.com/memes/three-headed-dragon
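To make the Python case concrete, here is a minimal sketch of how the two methods split the work (the `Point` class is just an illustrative example): `__new__` allocates and returns the instance, while `__init__` initializes the instance it is handed.

```python
class Point:
    def __new__(cls, x, y):
        # __new__ allocates and returns the new instance;
        # it is rarely overridden in everyday code
        return super().__new__(cls)

    def __init__(self, x, y):
        # __init__ receives the already-created instance and sets it up
        self.x = x
        self.y = y

p = Point(1, 2)
print(p.x, p.y)  # 1 2
```

In practice almost all Python classes define only `__init__`; overriding `__new__` matters mainly for immutable types and a few metaprogramming tricks.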
I think it's fairly short sighted to criticize these. FWIW, I also did that the first time I wrote Python. Other languages that do similar things provide a useful transparency.
Finance and engineering both have a degree of verifiability. Building evals around finance is easier than, e.g., marketing work.
Just by virtue of being a Go program, it enables even more sophisticated validation and automation if you want to implement it.
Let's say I used a custom power saw to carve a statue faster than ever before and more precisely. Would that reduce my influence and my application of taste? No. I would in fact be able to produce a piece faster and have more room for making more attempts.
Neural-network-based art tools are all giving us the same thing: easier execution. This means greater production and the ability to try more possibilities. The fact that creating art is more accessible to the public means that more creatives can be in the arena, making for more competition.
Any creator grapples with this change over time. Woodworkers of old prefer their hand techniques to modern power tools, painters prefer physical media, carvers prefer real blocks of marble or whatever. All of these things have modern digital equivalents, but the establishment of existing artists refuses to leave its posts. They hold their ground that the medium is critical to the art.
Art moves and changes slowly because of this human bias against new solutions. Go to any museum of modern art and you'll find that most of it could have been executed as such 20+ years ago. It's just that art takes time to accept a new way of doing something.
I am not saying that the mechanism is perfect, but it is more useful to have it than not. IMO its only weakness is that you never know whether a nested function throws a new exception type. This is a weakness for which we really don't have a solid solution - Java tried to address it with checked exceptions.
That Go doesn't use such a paradigm is, to me, bonkers. Practically every other language has such a construct, and how best to use it is pretty much settled convention these days.
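The weakness described above can be sketched in Python terms: nothing in a function's signature tells the caller which exceptions a nested call might raise, so you only know what to catch by reading the callee's body (the function names here are illustrative, not from the original):

```python
def parse_config(text):
    # Nothing in this signature declares that int() can raise ValueError;
    # the caller has to know this by convention or by reading the source.
    return int(text)

def load_setting(text):
    try:
        return parse_config(text)
    except ValueError:
        # If parse_config ever starts raising a different exception type,
        # this handler silently stops covering it.
        return 0

print(load_setting("42"))    # 42
print(load_setting("oops"))  # 0
```

Checked exceptions made this contract explicit in the signature, at the cost of the boilerplate that made them unpopular.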
As in, I want actual zero dependencies, not even the library itself. The reason: I never want these to randomly update.
You'd miss out on CVE fixes because you don't use the common dependency paradigm.
You'd also miss out on bug fixes if you are not detecting the bug itself.
Help me understand, because I'm with you on fewer dependencies, but this does feel a bit extreme.
That was a long preamble to this question: any senior devs out there (20+ years) who enjoy using Cursor? What’s the trick?
My prompts usually resemble instructions I could give a college student - the model just has a better understanding of concepts and professional lingo.
The benefit of this approach is that you know the code fairly well. You stay alongside the LLM, developing a deeper understanding of the code you'll ultimately create a PR for. Then when there's an incident, you have enough deep knowledge of the code to be tactical.
I have found that until I trust AI to develop the code unsupervised, I have to have an equally good mental model of everything AI makes.
I assume the former has massive overhead, but maybe it is worthwhile to keep responsiveness up for everyone.