nprateem commented on AI agents are starting to eat SaaS   martinalderson.com/posts/... · Posted by u/jnord
TeMPOraL · a day ago
SaaS products rely on resisting commoditization. AI agents defeat that.
nprateem · 21 hours ago
Yes, except for the fact that any non-trivial SaaS does non-trivial stuff that an agent will be able to call (acting as the 'secretary'), while the user still has to pay the subscription to use it.
nprateem commented on Bazzite: Operating System for Linux gaming   bazzite.gg/... · Posted by u/doener
nprateem · 16 days ago
Yet another homepage that doesn't tell me what it is. Something something gaming. Is it a lib, a piece of hardware...? Too bored to find out after scrolling down. Oh wait, they're addicted to deadlocks. Great.
nprateem commented on A first look at Django's new background tasks   roam.be/notes/2025/a-firs... · Posted by u/roam
nprateem · 17 days ago
Really you want automatic transpilation to Go. A good Christmas project for someone.
nprateem commented on OpenAI needs to raise at least $207B by 2030   ft.com/content/23e54a28-6... · Posted by u/akira_067
0xbadcafebee · 20 days ago
I just realized that other industries are way larger than AI. Assuming they capture the entire advertising market, only $390 billion was spent on advertising in the US last year. Compare that to health care, where $4.3 trillion was spent in the US last year, or commercial banking's revenue of $1.5 trillion, commercial real estate's $1.5 trillion, gasoline stations' $1.1 trillion, etc. What's amazing is that, despite the fact that AI isn't making much money, is taking on considerable debt, and isn't even assured to be all that useful, one third of the stock market is now just AI crap. The economy is going to collapse because of a small, brand-new industry. This... shouldn't be possible.
nprateem · 20 days ago
I know, right? It's like people are buying future value rather than buying based on current revenues or something.

How long did Apple keep going up following the smartphone revolution?

nprateem commented on Google Antigravity exfiltrates data via indirect prompt injection attack   promptarmor.com/resources... · Posted by u/jjmaxwell4
nprateem · 20 days ago
I said months ago you'd be nuts to let these things loose on your machine. Quelle surprise.
nprateem commented on Typing an AI prompt is not 'active' music creation   theverge.com/report/82514... · Posted by u/JeanKage
Rodeoclash · 22 days ago
As someone who occasionally writes a bit of music, I've had trouble putting into words exactly my feelings around AI-generated art. One of the best takes I've seen on it, which perfectly captured my feelings, was by The Oatmeal: https://theoatmeal.com/comics/ai_art
nprateem · 22 days ago
Soulless shit?
nprateem commented on The realities of being a pop star   itscharlibb.substack.com/... · Posted by u/lovestory
sfblah · 23 days ago
I had to go on YouTube to listen to some of the music mentioned here, as I'm pretty out of the loop on it. Given what I heard, I honestly think we're basically at the point where AI can generate equivalent or even better music. It's just very simple and doesn't feel particularly innovative or noteworthy.

Point being, I think it's likely this person is one of the last pop stars.

Actually, as I'm writing this, I realize that the music being produced by this person is probably already done by a computer. So maybe she's in the first wave of totally artificial pop stars.

nprateem · 23 days ago
Maybe you could tell all her fans how stupid they are and that they shouldn't enjoy her music.

Why not save them from themselves with some of your approved recommendations?

nprateem commented on The realities of being a pop star   itscharlibb.substack.com/... · Posted by u/lovestory
IshKebab · 23 days ago
> I find that this is often where the stupidity narrative can be born. I’ve always wondered why someone else’s success triggers such rage and anger in certain people and I think it probably all boils down to the fact that the patriarchal society we unfortunately live in has successfully brainwashed us all. We are still trained to hate women, to hate ourselves and to be angry at women if they step out of the neat little box that public perception has put them in. I think subconsciously people still believe there is only room for women to be a certain type of way and once they claim to be one way they better not DARE grow or change or morph into something else.

Nah it's nothing to do with women, it's simple jealousy. Everyone wants to be successful. If they can dismiss successful people as lucky or whatever (tbf some are) then it makes them feel better about their own failure to be successful (they are just as good; they just weren't as lucky).

A natural human tendency. Look at all the people saying Elon Musk isn't really an engineer. Yeah right, he definitely is heavily involved in the high-level technical decisions. Yes, he's an arsehole and moderately racist and probably quite lucky too, but he is good at his job.

nprateem · 23 days ago
I thought the same thing.

As for Musk... tbh I think that because the vast majority of us want things from other people, we temper our behaviour.

But when you have enough fame and money to do what you want, the filters can come off and we can be the selfish, nasty people we really are. And some people obviously like to play on that too, to get airtime or just to prove a point.

nprateem commented on The New AI Consciousness Paper   astralcodexten.com/p/the-... · Posted by u/rbanffy
andai · 25 days ago
So we currently associate consciousness with the right to life and dignity, right?

i.e. some recent activism for cephalopods is centered around their intelligence, with the implication that this indicates a capacity for suffering. (With the consciousness aspect implied even more quietly.)

But if it turns out that LLMs are conscious, what would that actually mean? What kind of rights would that confer?

That the model must not be deleted?

Some people have extremely long conversations with LLMs and report grief when they have to end it and start a new one. (The true feelings of the LLMs in such cases must remain unknown for now ;)

So perhaps the conversation itself must never end! But here the context window acts as a natural lifespan... (with each subsequent message costing more money and natural resources, until the hard limit is reached).

The models seem to identify more with the model than with the ephemeral instantiation, which seems sensible, e.g. in those experiments where LLMs consistently blackmail a person they think is going to delete them.

"Not deleted" is a pretty low bar. Would such an entity be content to sit inertly in the internet archive forever? Seems a sad fate!

Otherwise, we'd need to keep every model ever developed, running forever? How many instances? One?

Or are we going to say, as we do with animals, well the dumber ones are not really conscious, not really suffering? So we'll have to make a cutoff, e.g. 7B params?

I honestly don't know what to think either way, but the whole thing does raise a large number of very strange questions...

And as far as I can tell, there's really no way to know right? I mean we assume humans are conscious (for obvious reasons), but can we prove even that? With animals we mostly reason by analogy, right?

nprateem · 24 days ago
I've said it before: smoke DMT, take mushrooms, whatever. You'll know a computer program is not conscious because we aren't just prediction machines.
