Readit News
g42gregory commented on 73% of AI startups are just prompt engineering   pub.towardsai.net/i-rever... · Posted by u/kllrnohj
g42gregory · 3 months ago
Isn’t it a bit like saying, “X% of startups are just writing code”?
g42gregory commented on The Future of Fact-Checking Is Lies, I Guess   aphyr.com/posts/398-the-f... · Posted by u/speckx
saulpw · 3 months ago
How can we possibly stop this madness? Will it require draconian legislation and enforcement?

Increasingly I think that "free speech" should apply to humans only, not to humans armed with a gas-powered bullshit spewer.

g42gregory · 3 months ago
In the US, free speech protections are very selective (depending on what you're planning to say). The rest of the Western world does not even have laws protecting free speech. No need to worry.
g42gregory commented on YouTube Removes Windows 11 Bypass Tutorials, Claims 'Risk of Physical Harm'   news.itsfoss.com/youtube-... · Posted by u/WaitWaitWha
g42gregory · 3 months ago
Unfortunately, this raises an obvious question:

If they censor something like this, how can we trust these platforms with the actually important subjects?

g42gregory commented on Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB VRAM   phoronix.com/review/intel... · Posted by u/wrigby
g42gregory · 4 months ago
Does anybody know the memory bandwidth?
g42gregory commented on Sora Update #1   blog.samaltman.com/sora-u... · Posted by u/davidbarker
g42gregory · 4 months ago
It is already illegal to use images of somebody's likeness for commercial purposes, for purposes that harm their reputation, in ways that could cause confusion, etc. Basically, the only times you can use such images are for parody, for public figures, and under fair use.

Now OpenAI will be lecturing its own users while expecting them to make it rich. I suspect the users will find it insulting.

Generation for personal use is not illegal, as far as I know.

g42gregory commented on Evaluating the impact of AI on the labor market: Current state of affairs   budgetlab.yale.edu/resear... · Posted by u/Bender
rhetocj23 · 4 months ago
Forget about that.

Lets focus on the tech firms that produce software.

Two things should happen if AI proliferates into software development:

1) Increasing top line - due to more projects being taken on, by enabling labour to be more productive

2) Operating margin increasing - due to labour input declining and more cost-reduction projects being taken on

If those 2 things don't occur, the AI investment was a waste of money from a financial perspective. And this is before I even discount the cash flows by the cost of capital of these high-risk projects (a high discount rate).

At some point everyone will be analysed in this manner. Only Nvidia is winning as it stands; ironically, not because of LLMs themselves, but because it sells the hardware that LLMs run on.
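
A back-of-the-envelope sketch of that test (every figure below is hypothetical, purely to make the mechanics concrete): discount the incremental cash flows from the assumed top-line and margin gains at a high cost of capital, and compare them with the up-front AI spend.

```typescript
// Back-of-envelope NPV check for an AI rollout (all numbers hypothetical).
// Compares discounted incremental cash flows against the up-front AI spend.

function npv(cashFlows: number[], discountRate: number): number {
  // cashFlows[t] is the incremental cash flow in year t+1.
  return cashFlows.reduce(
    (acc, cf, t) => acc + cf / Math.pow(1 + discountRate, t + 1),
    0,
  );
}

// Hypothetical inputs: $10M AI investment, 5-year horizon, 15% discount
// rate to reflect the riskiness of the project.
const aiInvestment = 10_000_000;
const discountRate = 0.15;

// Incremental cash flow per year = extra revenue from more projects (1)
// plus cost savings from a higher operating margin (2).
const extraRevenue = 3_000_000;
const marginSavings = 1_500_000;
const years = 5;
const incremental = Array.from({ length: years }, () => extraRevenue + marginSavings);

const value = npv(incremental, discountRate) - aiInvestment;
console.log(value > 0 ? "clears the hurdle rate" : "a waste of money, financially");
```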

g42gregory · 4 months ago
I would also add that many (most?) companies/entities do not sell software but have large IT departments that could write software for internal consumption. Think Exxon, BP, Caterpillar, airlines, government labs/agencies, the DOD, etc.

Internally, they could actually write 1,000x more software, and it would be absorbed by internal customers. They would buy less packaged software from tech firms (unless it's infrastructure), and internally they could keep the same headcount or more, since AI allows them to write more software.

g42gregory commented on Claude Sonnet 4.5   anthropic.com/news/claude... · Posted by u/adocomplete
Implicated · 4 months ago
I'm not trying to be offensive here; I feel the need to say that up front.

But that prompt leads me to believe that you're going to get rather 'random' results due to leaving SO much room for interpretation.

Also, in my experience, punctuation is important - particularly for pacing and for grouping the logical 'parts' of a task - and your prompt reads like a run-on sentence.

Making a lot of assumptions here, but I bet that if I were in your shoes, writing a prompt to start a task of a similar type, my prompt would have been 5 to 20x the length of yours (depending on complexity and importance), with far more detail, including overlapping descriptions of various tasks (i.e., potentially describing the same thing more than once in different ways, in relation to other things, to establish relation/hierarchy).

I'm glad you got what you needed - but these types of prompts and approaches are why I believe so many people think these models aren't useful.

You get out of them what you put into them. If you give them structured and well-written requirements, as well as a codebase that follows consistent patterns, you're going to get back something of comparable quality. It's no different with a developer: if you gave a junior coder, or a team of developers, the following as a feature requirement: `implement a fuzzy search for conversations and reports either when selecting "Go to Conversation" or "Go to Report" and typing the title or when the user types in the title in the main input field, and none of the standard elements match, a search starts with a 2s delay` then you can't really be mad when you don't get back exactly what you wanted.
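
To make the point concrete, here is just one of the many readings that requirement admits, sketched in TypeScript; the data shapes, the `fuzzyScore` heuristic, the score threshold, and the exact handling of the 2-second delay are all assumptions a developer (or a model) would have to invent on their own.

```typescript
// One possible interpretation: if nothing matches exactly, wait 2 seconds
// after the last keystroke, then run a fuzzy search over conversation and
// report titles. Everything here (types, scoring, threshold) is a guess.

interface Item { id: string; title: string; kind: "conversation" | "report"; }

// Naive fuzzy score: fraction of query characters found, in order, in the title.
function fuzzyScore(query: string, title: string): number {
  const q = query.toLowerCase();
  const t = title.toLowerCase();
  let qi = 0;
  for (const ch of t) {
    if (qi < q.length && ch === q[qi]) qi++;
  }
  return q.length === 0 ? 0 : qi / q.length;
}

function fuzzySearch(items: Item[], query: string, minScore = 0.6): Item[] {
  return items
    .map(item => ({ item, score: fuzzyScore(query, item.title) }))
    .filter(({ score }) => score >= minScore)
    .sort((a, b) => b.score - a.score)
    .map(({ item }) => item);
}

let pending: ReturnType<typeof setTimeout> | undefined;

// Called on every keystroke in the main input field.
function onTitleInput(items: Item[], query: string, onResults: (hits: Item[]) => void): void {
  const exact = items.filter(i => i.title.toLowerCase() === query.toLowerCase());
  if (exact.length > 0) {
    onResults(exact); // a standard element matched, so no fuzzy search
    return;
  }
  if (pending !== undefined) clearTimeout(pending);
  pending = setTimeout(() => onResults(fuzzySearch(items, query)), 2000); // 2s delay
}
```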

edit: To put it another way - spend a few more minutes on the initial task/prompt/description of your needs and you're likely to get back more of what you're expecting.

g42gregory · 4 months ago
I have to agree with this assessment. I am currently running at a rate of 300-400 lines of spec per 1,000 LOC with Claude Code. The specs are AI-assisted too, otherwise you might go crazy. :-) Plus 2,000+ lines of AI-generated tests. Pretty restrictive, but then it works just fine.

u/g42gregory

Karma: 3765 · Cake day: November 12, 2013