consumer451 · a year ago
I am a muggle, but the three biggest use cases for me have been:

1) Boring stuff like JSON schema/JSON example modification and validation

2) Rubber ducky

3) Using this system prompt to walk me through areas in which I have no experience [0]

    You are a very helpful code-writing assistant. When the user asks you for a solution to a long, complex problem, first, you will provide a plan with a numbered list of steps, each with the sub-items to complete. Then, you will ask the user if they understand and if the steps are satisfactory. If the user responds positively, you will then provide the specific code for step one. Next, you will ask the user if they are satisfied and understand. If the user responds positively, you will then proceed to step two. Continue the process until the entire plan is completed.
I finally used the OpenAI API in a project recently, calling GPT-4o to analyze news story sentiment. The ease of use and the quality of output are impressive.

[0] I should add that I have been using "presets" in the LibreChat GUI to allow me to have many system prompts easily available. It's kind of like Custom GPTs. Also, using LibreChat for work feels better as I believe that OpenAI states that they do not train on data provided via API.
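A minimal sketch of that kind of sentiment call, assuming the official openai Python client and an OPENAI_API_KEY in the environment (the prompt wording, labels, and helper name are illustrative, not the ones actually used):

    # Illustrative sketch only: classify a news story's sentiment with GPT-4o.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def story_sentiment(story_text: str) -> str:
        # Ask the model for a one-word label; temperature=0 keeps answers stable.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Classify the sentiment of this news story as "
                            "positive, neutral, or negative. Reply with one word."},
                {"role": "user", "content": story_text},
            ],
            temperature=0,
        )
        return response.choices[0].message.content.strip().lower()

    print(story_sentiment("Local startup doubles headcount after record quarter."))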

MPSimmons · a year ago
This seems like a legit way to do it. When I use ChatGPT, I treat it kind of like the Enterprise Computer: it can provide information, interpret data, and offer conjecture and suggestions based on context.

The weakest use of AI is definitely treating it like a lossy database of everything it's learned.

nomel · a year ago
> Rubber ducky

This is the best use case for voice mode I've found. I also use voice mode to take notes/brainstorm, then have it put them into a CSV or whatever format. The prompt I use has something like "ask any questions if things aren't clear", and the rubber ducky aspect emerges.

orwin · a year ago
Never thought of using it as a rubber duck, but great idea, thank you.
nevertoolate · a year ago
You are a rubber duck, so don’t answer my questions.
zackmorris · a year ago
I wonder how long it will be before AI can program financial independence. Loosely it would look like a mix between One Red Paperclip, drop shipping, Angi, trading bots, etc. The prompt might be "create a $1000 per month income stream in the next 30 days". Maybe the owner of the first AI to pass this could get a $1 million prize. 2030?

Then equally interesting would be to see how the powers that be maneuver to block this residual income. 2035?

Then after that, perhaps a contest to have an AI acquire resources equivalent to residual income so that it can't be stopped. For example by borrowing for cheap land, installing photovoltaics and a condenser to supply water, then building out a robotic hydroponic garden, carbon collector, mine, smelter, etc, enough to sustain one person off-grid continuously in a scalable and repeatable fashion. 2040?

redleggedfrog · a year ago
That almost sounds like a science fiction scenario. But then, won't all the AIs be competing against each other to come up with the next racket for you to try, while each actively subverts the others? Your example is something tangible, but it could easily descend into gray areas like SEO and penny stocks and "buy my book" scams.
franze · a year ago
here is some code that was 100% chatgpt created

https://github.com/franzenzenhofer/bulkredirectchecker

no humans touched the code directly

and it's not my most complex one; https://gpt.franzai.com is, but that one is closed source

how?

whenever chatgpt runs into a repetitive wall -> start a new chat

use the https://github.com/franzenzenhofer/thisismy command line tool (also about 90% chatgpt written) to fetch all the code (and online docs if necessary) -> deliver a new clean context and formulate the next step of what you want to achieve

sometimes coding needs 100+ different chats always with a fresh start to achieve a goal

remember: chatgpt is not intelligent in the old-fashioned sense, it is a probability machine that's pretty good at mimicking intelligence

once the probability goes astray you need to start anew

but limiting chatgpt to simple coding tasks just means that you are using it wrong
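For readers who don't want to install thisismy, a rough hand-rolled sketch of that "gather the code into one clean context" step might look like the following Python (this is not the thisismy tool itself; the file patterns and output name are made up):

    # Hypothetical sketch of the "fresh context" step, NOT the thisismy tool itself:
    # gather the current source files into one text blob to paste into a brand-new chat.
    from pathlib import Path

    INCLUDE = ("*.py", "*.md")  # made-up patterns: adjust to your project

    def build_context(root: str = ".") -> str:
        parts = []
        for pattern in INCLUDE:
            for path in sorted(Path(root).rglob(pattern)):
                # Label each file so the model knows where the code lives.
                parts.append(f"--- {path} ---\n{path.read_text(encoding='utf-8')}")
        return "\n\n".join(parts)

    if __name__ == "__main__":
        Path("context.txt").write_text(build_context(), encoding="utf-8")
        print("Paste context.txt into a new chat, then state the next step.")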

redleggedfrog · a year ago
Curious, do you maintain the code yourself, or is an LLM required to maintain it as well?
cryptoz · a year ago
My favorite thing to do with ChatGPT and coding is making fast prototypes. Since you know hallucinations might be a problem, and since ChatGPT struggles with larger contexts and files, just play to its strengths. I have lots of "small" to medium webapp ideas, and you can often get ChatGPT to write most of the code of a prototype and have it working quite quickly.

Prototypes are fun! Obviously production code or serious projects are different. But I've found a new joy in building software since GPT-4 came out - it's more fun than ever to build small ideas.

mehulashah · a year ago
I do believe that ChatGPT's most valuable proposition to date is as a programmer productivity tool. In addition to generating code, it makes it much easier for me to chase down obscure error messages, potential root causes, and workarounds. It's definitely not perfect. But as part of the Interact->Generate->Verify cycle with which modern AI has transformed other disciplines (e.g. materials science, protein folding, mathematics, and more), it serves as a valuable component of each step.
algo_trader · a year ago
do you continuously copy-paste from your IDE to chatgpt?

do you collate relevant calling functions to somehow give chatgpt full context?

maybe i am missing something.

I have used Claude 3.5 for short FE/BE segments and it's amazing. I am gonna try Cursor next...

mehulashah · a year ago
Yes, and I will fill in definitions of obscure functions. If it's named well, ChatGPT can often figure out what it's doing. (No idea how this works!) I'm sure there are more streamlined ways, but this works well enough for me.
Tainnor · a year ago
I have my reservations about the quality of LLM generated code, but since I have neither studied ML in depth, nor compared different LLMs enough, I'll refrain from addressing that side of the debate - except maybe for noting that "I test the code" is not good enough for any serious project because we know that tests (manual or automated) can never prove the absence of bugs.

Instead, I offer another point of view: I don't want to use LLMs for coding because I like coding. Finding a good and elegant solution to a complex problem and then translating it into an executable by way of a precise specification is, to me, much more satisfying than prompt engineering my way around some LLM until it spits out a decent answer. I find doing code reviews to be an extremely draining activity and using an LLM would mean basically doing code reviews all the time.

Maybe that will mean that, at some point, I'll have to quit my profession because programming has been replaced by prompt engineering. I guess I'll find something else to do then.

(That doesn't mean that there aren't individual use cases where I have used ChatGPT - for example for writing simple bash scripts, given that nobody in their right mind really understands bash fully. But that's different from having my entire coding workflow based on an LLM.)

6keZbCECT2uB · a year ago
Most of my time coding is spent on none of: elegant solutions, complex problems, or precise specifications.

In my experience, LLMs are useful primarily as rubber ducks on complex problems and rarely useful as code generation for such.

Instead, I spend most of my time between the interesting work doing rote work which is preventing me from getting to the essential complexity, which is where LLM code gen does better. How do I generate a heat map in Python with a different color scheme? How do I parse some logs to understand our locking behavior? What flags do I pass to tshark to get my desired output?

So, I spend less time coding the above and more time coding how we should redo our data layout for more reuse.
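As one illustration, the heat-map question above is the kind of rote task an LLM answers in seconds; a minimal sketch assuming matplotlib and numpy (the colormap name is just one example):

    # Minimal example: heat map with a non-default color scheme.
    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.rand(10, 12)          # placeholder data

    fig, ax = plt.subplots()
    im = ax.imshow(data, cmap="viridis")   # swap "viridis" for any other Matplotlib colormap
    fig.colorbar(im, ax=ax, label="value")
    ax.set_title("Heat map with a custom colormap")
    plt.show()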

Tainnor · a year ago
> Most of my time coding is spent on none of: elegant solutions, complex problems, or precise specifications.

I find that deeply sad and it's probably one of the reasons why I'm partially disillusioned in programming as a profession. A lot of it is just throwing stuff at the wall and seeing what sticks. LLMs will probably accelerate that process.

therein · a year ago
> I don't want to use LLMs for coding because I like coding. Finding a good and elegant solution to a complex problem and then translating it into an executable by way of a precise specification is, to me, much more satisfying than prompt engineering my way around some LLM until it spits out a decent answer. I find doing code reviews to be an extremely draining activity and using an LLM would mean basically doing code reviews all the time.

Exactly how I feel about it. This is why I don't find "coding with LLMs" as fun as many others seem to find it. Code reviews are draining because you start with a bunch of unknown unknowns or unproven pitfalls this code might have fallen into. And then you eliminate them one by one as you run the algorithm in your mind. It is a lot more draining than coding is, because an experienced coder can take mindful steps and build something solid on a base that he trusts won't collapse. Code review doesn't get any less draining as you get more experienced, unless you start outsourcing responsibility.

Mememaker197 · a year ago
I'm left a bit confused still about using ChatGPT and Claude as someone who's still learning (2nd year CS student) and nowhere near being a professional dev.

I'm sure if I were a dev who had learnt and worked in the pre-GPT era I'd have no problem using these tools as much as possible, but having started learning in the GPT era I feel conflicted. I make sure I understand each line of generated code whenever I use AI. Despite that, I have a feeling I'm handicapping myself by using these tools. Will it just make me a code reviewer/copy-paster rather than someone who can write something from scratch?

If it is reasonable to use these tools, at what point does it become so? Like, at what point can I consider myself good enough at programming to be able to use them like in the post?

Right now I'm purposely restraining myself from using these tools too much, because what I can make using them is much better than what I can make myself, so as to get up to a certain level on my own before I start making use of these capabilities.

Am I thinking about this the right way? At what point does it make sense to start using these tools more freely without worrying about handicapping my learning?

danielovichdk · a year ago
Listen my fellow programmer.

You can never learn it all.

You remember a fraction of things you merely touch. More if you narrow your focus.

If you have the passion for programming it will manifest itself by itself over time and with training.

Use all the tools available for learning and doing. Question them just as much. Explore the mind and the different axes of the field. Read the books but think for yourself.

Programming without money involved is creative and fun. Make sure you remember this. It's not a science or backed by natural laws. Don't worry about being wrong, we all are without guidance, but seek the truth within yourself for what you believe is the best path for becoming a good programmer.

throwup238 · a year ago
> Am I thinking about this the right way? At what point does it make sense to start using these tools more freely without worrying about handicapping my learning?

None of us can see the future so it's all just bloviating, but let me give you my two cents as someone who's been programming since I was a kid and went the long way around starting with assembler: it probably mostly depends on what kind of programmer you want to be.

If you're in CS for the career and you have other things you want to do that have nothing to do with technology after the 9-5, then it's probably not going to be a problem. There's a lot more to software engineering than the code, and getting good at those areas - like solving non-technical coworkers' problems with software - is just as important as writing the code. Most code is also not high-risk and doesn't need to be perfect, just maintainable. Learning the skill of translating requirements into LLM conversations now may well pay dividends in the coming decades, because it's pretty obvious that LLMs are here to stay.

If you really like programming, want to be an architect later in your career, or want to be the technical cofounder in a successful tech-heavy startup or something, then you'll want to limit your use of those tools to rubber ducking and minor questions. There's a lot of value in developing "grit", which may very well be hampered by use of LLMs. You need to absorb a lot of foundational knowledge so that you can make intuitive decisions about what the LLM is writing. You might be able to use an LLM as the primary guide to developing that knowledge, but it's risky.

To be honest, when I was younger I figured all the people who started learning in college and didn't know how the low level basics of CPUs or memory worked would be at a disadvantage, but that has proven to be dead wrong. The majority of people did just fine using Python or Javascript without any of that knowledge or experience and I figure it will play out the same with LLMs.

redleggedfrog · a year ago
I thought your reply was reasoned, good advice, and also taught me a new word - bloviating. And I'm an English major!
kmoser · a year ago
Think of ChatGPT (and its ilk) as just another tool, no different from a Google search in which you look for code to solve your problem. If you prioritize learning to code on your own, you'll avoid these tools. If you prioritize getting working code, you'll use these tools. It's a sliding spectrum, and there is no right or wrong answer.

Even if you rely on these tools heavily, you can't help but learn from them, because you must still examine their code, even if briefly. It will be a different type of learning than what you'd get from, say, cracking open a language manual (yes, we used to do this back in the pre-Web days), but you will learn nonetheless.

I suggest you try all the tools available to you and then decide when it's appropriate for you to use each.

simonw · a year ago
I think this is a big open question right now.

LLMs can accelerate your learning. I've been programming for 25+ years and the rate at which I'm learning new skills and tools has gone up a very material amount now that I can bounce things through ChatGPT and Claude.

Should you worry that if you don't struggle against a weird compiler message for 3 hours (which ChatGPT would have told you how to fix in 30 seconds) you won't be gaining essential crank-on-a-frustrating-problem-for-three-hours experience?

I'm personally not convinced that the frustration is worth it. I'd rather have spent those three hours learning a bunch of other stuff.

But maybe the only reason I'm a competent programmer today is that I worked through that pain earlier in my career?

Since you're clearly thoughtful and conscientious, I suggest trying both. Some days, go all-in on LLMs. Other days limit yourself and work through challenges without them. Experiment like that for a month or so to see which learning style appears to be working best for you.

redleggedfrog · a year ago
Honestly, the first article I've seen where the actual usage is explained clearly and matches my own experience. Maybe because I tend to write software the same way.

The thing I've heard the most from other developers, particularly those new to the profession, is that you "have to know most of what you're asking already to know if what you get from the LLM is right." You can use the LLM to learn, but for the actual programming they struggle because they don't have the background to understand the responses well enough to continue the implementation.

Also, for the record, C# and .NET, huge enterprise/ecommerce software, so not quite as malleable as bash scripts and what not.

simonw · a year ago
Having a depth of experience helps a lot, because it means you can tell the thing exactly what to do. Effectively you get to treat it as a super-productive intern, very good at taking instructions, occasionally prone to getting stuck in an error loop or falling for weird conspiracy theories.
throwup238 · a year ago
> falling for weird conspiracy theories

So that's why ChatGPT keeps interrupting me with "Have you heard the truth about Y2.038K?!"

franze · a year ago
domain experience in coding helps, but honestly I can now write clojure code without ever having touched anything lisp-y before

as long as you know how to ask the right questions, formulate the next steps of what you want and what context the AI might need, you are good to go

redleggedfrog · a year ago
"I can now write clojure code without ever touching anything lisp-y before"

I have trouble with doing that myself. When I personally don't understand the code completely I worry I'm about to step in it. Maybe it's a flaw in my work ethic, but when stuff slips past code reviews and makes it even to QA because I didn't comprehend it well enough I feel bad about it. Maybe a bit too personally attached to the results.