handoflixue commented on Writing with LLM is not a shame   reflexions.florianernotte... · Posted by u/flornt
satisfice · 3 hours ago
The author of this piece commits a common mistake: analyzing AI use as if communication is nothing more than an isolated transaction. Instead communication is usually a process of creating and maintaining a relationship of some kind with other people.

Here’s a thought experiment: Imagine if I handed you a $100 bill and asked you to examine it carefully. Is it real money? Perhaps you immediately suspect it is counterfeit, and subject it to stringent tests. Let’s say all the tests pass. Okay, given that it is indistinguishable from a legit $100 bill, is it therefore correct and ethical for me to spend this money?

You know the answer: “not necessarily.”

This is because spending money is about more than a series of steps in a transaction. It is based on certain premises that, if false, represent a hazard to the social contract by which we all live in peace and security.

It seems to me that many AI fanboys are arguing that as long as their money passes your scrutiny, it doesn’t matter if it was stolen or counterfeit. In some narrow sense, it really doesn’t matter. But narrow senses are not the only ones that matter.

When I read writing that you give me and present it as your work, I am getting to know you. I am learning how I can trust you. I am building a simulation of you in my mind that I use to anticipate your ideas and deeds. All that is disrupted and tainted by AI.

It’s not comparable to a grammar checker, because grammar is like clothing. When an editor modifies my grammar, this does not change my message or prevent me from getting across my ideas. But AI is capable of completely altering your ideas. How do you know it didn’t?

You can only know through careful proofreading. Did you proofread carefully? Whether you did or not: I don't believe that people who want AI to write for them are the kind of people who carefully proofread what comes out of AI. And of course, if you ask AI to come up with ideas by itself, for all we know that is plagiarism: stolen words.

Therefore: if you use AI in your writing, you had better hide that from me. And if I find out you are using it, I will never trust you again.

handoflixue · an hour ago
Every day cashiers accept $100 bills on the basis that they pass the counterfeit tests, and every day society has failed to collapse from what you posit is a "hazard to the social contract".
handoflixue commented on Writing with LLM is not a shame   reflexions.florianernotte... · Posted by u/flornt
monkaiju · 4 hours ago
I like the distinction between syntactic tools, like spellcheck, and semantic tools, like AI. The former clearly doesn't impugn the author, the latter does. They seem clearly and fundamentally different to me.
handoflixue · an hour ago
Where do you put the line? What do you do with the ambiguous categories?

Clearly a trucker does not "deliver goods" and a taxi driver is not in the business of ferrying passengers - the vehicle does all of that, right?

Writers these days rarely bother with the actual act of writing now that we have typing.

I've rarely heard a musician, but I've heard lots of CDs and they're really quite good - much cheaper than musicians, too.

Is my camera an artist, or is it just plagiarizing the landscape and architecture?

handoflixue commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
CGamesPlay · 9 days ago
The bastawhiz comment in this thread has the right answer. When you start a new conversation, Claude has no context from the previous one and so all the "wearing down" you did via repeated asks, leading questions, or other prompt techniques is effectively thrown out. For a non-determined attacker, this is likely sufficient, which makes it a good defense-in-depth strategy (Anthropic defending against screenshots of their models describing sex with minors).
handoflixue · 9 days ago
Worth noting: an edited branch still has most of the context - everything up to the edited message. So this just sets an upper-bound on how much abuse can be in one context window.
handoflixue commented on Claude Opus 4 and 4.1 can now end a rare subset of conversations   anthropic.com/research/en... · Posted by u/virgildotcodes
postalcoder · 9 days ago

> There's not a good reason to do this for the user.
Yes, even more so when encountering false positives. Today I asked about a pasta recipe. It told me to throw some anchovies in there. I responded with: "I have dried anchovies." Claude then ended my conversation due to content policies.

handoflixue · 9 days ago
The NEW termination method, from the article, will just say "Claude ended the conversation"

If you get "This conversation was ended due to our Acceptable Usage Policy", that's a different termination. It's been VERY glitchy the past couple of weeks. I've had the most random topics get flagged here - at one point I couldn't say "ROT13" without it flagging me, despite discussing that exact topic in depth the day before, and then the day after!

If you hit "EDIT" on your last message, you can branch to an un-terminated conversation.

handoflixue commented on Is chain-of-thought AI reasoning a mirage?   seangoedecke.com/real-rea... · Posted by u/ingve
js8 · 10 days ago
I think LLM's chain of thought is reasoning. When trained, LLM sees lot of examples like "All men are mortal. Socrates is a man." followed by "Therefore, Socrates is mortal.". This causes the transformer to learn rule "All A are B. C is A." is often followed by "Therefore, C is B." And so it can apply this logical rule, predictively. (I have converted the example from latent space to human language for clarity.)

Unfortunately, sometimes the LLM also learns that "All A are C. All B are C." is followed by "Therefore, A is B.", due to bad examples in the training data. (More insidiously, it might learn this rule only in a special case.)

So it learns some logic rules but not consistently. This lack of consistency will cause it to fail on larger problems.

I think NNs (transformers) could be great in heuristic suggesting which valid logical rules (could be even modal or fuzzy logic) to apply in order to solve a certain formalized problem, but not so great at coming up with the logic rules themselves. They could also be great at transforming the original problem/question from human language into some formal logic, that would then be resolved using heuristic search.
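The valid and invalid rule patterns described above can be sketched as a toy "surface reasoner" that matches premise shapes and emits conclusions without checking soundness. This is purely illustrative (the rule names and tuple encoding are my own, not from the comment):

```python
# Each premise "All A are B" or "C is A" is encoded as a (subject, predicate) tuple.

def valid_rule(premises):
    """Sound pattern: All A are B; C is A => Therefore, C is B."""
    (a, b), (c, a2) = premises
    if a == a2:  # middle term matches
        return (c, b)
    return None

def invalid_rule(premises):
    """Unsound pattern from the comment: All A are C; All B are C => A is B."""
    (a, c), (b, c2) = premises
    if c == c2:  # shared predicate, but this does NOT license the conclusion
        return (a, b)
    return None

# Valid: All men are mortal. Socrates is a man. => Socrates is mortal.
print(valid_rule([("men", "mortal"), ("Socrates", "men")]))    # ('Socrates', 'mortal')

# Unsound: All cats are mammals. All dogs are mammals. => cats are dogs (!)
print(invalid_rule([("cats", "mammals"), ("dogs", "mammals")]))  # ('cats', 'dogs')
```

Both rules fire whenever their surface pattern matches; nothing in the mechanism distinguishes the sound rule from the unsound one, which is the inconsistency the comment points to.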

handoflixue · 10 days ago
Humans are also notoriously bad at this, so we have plenty of evidence that this lack of consistency does indeed cause failures on larger problems.
handoflixue commented on Scapegoating the Algorithm   asteriskmag.com/issues/11... · Posted by u/fmblwntr
like_any_other · 12 days ago
> What is worse they admit it, even have systems for correcting errors publicly

Errors, even lies, happen, but they are negligible compared to the most powerful tool of propaganda: cherrypicking. E.g. the otherwise thorough NYT reporting on air traffic controller shortages [1] entirely omitted the FAA diversity hiring scandal that disqualified applicants with top grades if they weren't diverse enough [2]. During COVID, the credible experts were happily making models of how many deaths a motorcycle rally caused [3], but when it came time to do the same for BLM, we instead got "Protest Is a Profound Public Health Intervention" [4]. This is not an outlier - social scientists have been turning a blind eye to results they dislike since at least 1985 [5].

[1] https://www.nytimes.com/2023/12/02/business/air-traffic-cont...

[2] https://www.tracingwoodgrains.com/p/the-full-story-of-the-fa...

[3] https://pmc.ncbi.nlm.nih.gov/articles/PMC7753804/

[4] https://time.com/5848212/doctors-supporting-protests/

[5] The authors also submitted different test studies to different peer-review boards. The methodology was identical, and the variable was that the purported findings either went for, or against, the liberal worldview (for example, one found evidence of discrimination against minority groups, and another found evidence of "reverse discrimination" against straight white males). Despite equal methodological strengths, the studies that went against the liberal worldview were criticized and rejected, and those that went with it were not. - from https://theweek.com/articles/441474/how-academias-liberal-bi..., citing the study "Human subjects review, personal values, and the regulation of social science research.": https://psycnet.apa.org/record/1986-12806-001

handoflixue · 12 days ago
Are you under the assumption that this is new?

A few cherry-picked examples seem like a really weird way to try and prove cherry-picking is happening, much less establish "cherry-picking has become worse since a certain date"

handoflixue commented on Optimizing my sleep around Claude usage limits   mattwie.se/no-sleep-till-... · Posted by u/mattwiese
chatmasta · 12 days ago
And yet businesses seem to have no trouble paying for multiple accounts. I’m sure OP could register as a business or even recruit a friend to pay for an account on his behalf. I don’t think Anthropic cares as long as you’re paying them for the two accounts…
handoflixue · 12 days ago
Business plans usually require a 5 seat minimum, charge per seat, and have different pricing levels - but yeah, nothing stops you from registering an LLC. Namecheap is even running a special: Buy a domain name and get an LLC for free: https://www.namecheap.com/apps/business-starter-kit/

(not affiliated, I was just very surprised when they tried to upsell me last time I renewed my domain :))

handoflixue commented on Why are there so many rationalist cults?   asteriskmag.com/issues/11... · Posted by u/glenstein
nyeah · 12 days ago
I'm feeling a little frustrated by the derail. My complaint is about some small group claiming to have a monopoly on a normal human faculty, in this case rationality. The small group might well go on to claim that people outside the group lack rationality. That would be absurd. The mental health profession do not claim to be immune from mental illness themselves, they do not claim that people outside their circle are mentally unhealthy, and they do not claim that their particular treatment is necessary for mental health.

I guess it's possible you might be doing some deep ironic thing by providing a seemingly sincere example of what I'm complaining about. If so it was over my head but in that case I withdraw "derail"!

handoflixue · 12 days ago
> My complaint is about some small group claiming to have a monopoly on a normal human faculty, in this case rationality.

"Rationalists" don't claim a monopoly any more than Psychiatry does.

> The small group might well go on to claim that people outside the group lack rationality.

Again, something that psychiatry is quite noteworthy about: the entire point of the profession is to tell non-professionals that they're doing Emotionally Healthy wrong.

> The mental health profession do not claim to be immune from mental illness themselves,

Rationalists don't claim to be immune to irrationality, and this is in fact repeatedly emphasized: numerous cornerstone articles are about "wow, I really fucked up at this Rationality thing", including articles by Eliezer.

> they do not claim that people outside their circle are mentally unhealthy

... what?

So if I go to a psychiatrist, you think they're gonna say I'm FINE? No matter what?

Have you ever heard of "involuntary commitment"?

> and they do not claim that their particular treatment is necessary for mental health.

Again, this is about as true as it is for rationalists.

handoflixue commented on Why are there so many rationalist cults?   asteriskmag.com/issues/11... · Posted by u/glenstein
nyeah · 12 days ago
No. I have no beef with psychology or psychiatry. They're doing good work as far as I can tell. I am poking fun at people who take "rationality" and turn it into a brand name.
handoflixue · 12 days ago
Why is "you can work to avoid cognitive biases" more ridiculous than "you can work to improve your mental health"?
handoflixue commented on Why are there so many rationalist cults?   asteriskmag.com/issues/11... · Posted by u/glenstein
noqc · 12 days ago
Perhaps I will get downvoted to death again for saying so, but the obvious answer is because the name "rationalist" is structurally indistinguishable from the name "scientology" or "the illuminati". You attract people who are desperate for an authority to appeal to, but for whatever reason are no longer affiliated with the church of their youth. Even a rationalist movement which held nothing as dogma would attract people seeking dogma, and dogma would form.

The article begins by saying the rationalist community was "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences". Obviously the article intends to make the case that this is a cult, but it's already done with the argument at this point.

handoflixue · 12 days ago
> Obviously the article intends to make the case that this is a cult

The author is a self-identified rationalist. This is explicitly established in the second sentence of the article. Given that, why in the world would you think they're trying to claim the whole movement is a cult?

Obviously you and I have very different definitions of "obvious"

u/handoflixue

Karma: 1075 · Cake day: March 21, 2019