sram1337 commented on AI vs. Professional Authors Results   mark---lawrence.blogspot.... · Posted by u/biffles
Aeolun · 9 days ago
Personally I’m having a blast reading AI generated fiction. As long as the direction is human, and often enough corrected to keep the minor inconsistencies out, the results are pretty good.

For me it’s no different from generating code with Claude, except it’s generating prose. Without human direction your result ends up as garbage, but there’s no need to go and actually write all the prose yourself.

And I guess that just like with code, sometimes you have to hand craft something to make it truly good. But that’s probably not true for 80% of the story/code.

sram1337 · 9 days ago
I would love to read some of this. Where do you find AI generated fiction?
sram1337 commented on Dicing an Onion, the Mathematically Optimal Way   pudding.cool/2025/08/onio... · Posted by u/surprisetalk
sram1337 · 10 days ago
In my opinion there is no such thing as too much dork time. This post is fun, just like cooking. The onion-inspired font for the section titles is fun. The interactive graphs are fun. Also vibe coding is fun.

What was the point of this judgmental comment?

sram1337 commented on Offline.kids – Screen-free activities for kids   offline.kids/... · Posted by u/ascorbic
sram1337 · 22 days ago
Some context from the dev's blog (https://highrise.digital/blog/building-offline-kids-a-direct...):

---

"Within the last few weeks, Mark and I have built and launched Offline.Kids.

It’s a website to help parents reconnect with their kids and for kids to reconnect with the world around them.

Offline.Kids is a directory of screen-free activities for all ages. Each activity is categorised so that parents can find appropriate activities for their situation.

For example, you can find:

- quick, clean activities for 6 year olds
- outdoor kids activities that take 1-2 hours
- low energy indoor crafts

We built the site off the back of our new directory landing page plugin (catchy name still in progress!). It instantly creates thousands of SEO friendly landing pages for the activities. It’s early days, but Google is successfully indexing the pages and we’ll see how the rankings change over time.

So, if you’re looking for screen-free activities for your kids, check out the website, and share with anyone you think might find it useful!"

sram1337 commented on Writing is thinking   nature.com/articles/s4422... · Posted by u/__rito__
sram1337 · a month ago
sram1337 commented on 15,000+ AI-generated fake podcasts   kaggle.com/datasets/liste... · Posted by u/wenbin
egypturnash · a month ago
It's fake art made by pouring a bunch of images into an algorithmic hopper and spitting out something vaguely like them without paying a single cent to anyone whose data was grabbed for this abuse of fair use, at a huge cost of power hidden away from the user.
sram1337 · a month ago
I disagree it's "fake art."

Criticize the process of creating it all you want.

sram1337 commented on 15,000+ AI-generated fake podcasts   kaggle.com/datasets/liste... · Posted by u/wenbin
sram1337 · a month ago
I take issue with the term "fake podcast." This is like calling AI generated art "fake art."

The issue they have is with low-quality content. If the AI generated content was better than most human-created podcasts and was making their engagement numbers go up, I doubt they would be calling them fake or removing them.

sram1337 commented on I used o3 to profile myself from my saved Pocket links   noperator.dev/posts/o3-po... · Posted by u/noperator
gorgoiler · 2 months ago
Interesting article. Bizarrely, it makes me wish I’d used Pocket more! Tangentially, with LLMs I’m getting very tired of the standard patter one sees in their responses. You’ll recognize the general format of chatty output:

Platitude! Here’s a bunch of words that a normal human being would say followed by the main thrust of the response that two plus two is four. Here are some more words that plausibly sound human!

I realize that this is of course how it all actually works underneath — LLMs have to waffle their way to the point because of the nature of their training — but is there any hope of being able to post-process out the fluff? I want to distill down to an actual answer inside the inference engine itself, without having to use more language-corpus machinery to do so.

It’s like the age old problem of internet recipes. You want this:

  500g wheat flour
  280ml water
  10g salt
  10g yeast
But what you get is this:

  It was at the age of five, sitting
  on my grandmother’s lap in the
  cool autumn sun on West Virginia
  that I first tasted the perfect loaf…

sram1337 · 2 months ago
That is an issue with general-use LLM apps like ChatGPT - they have to have wide appeal, so if you want replies that differ from what the average user wants, you’re going to have a bad time.

OpenAI has said they are working on making ChatGPT's output more configurable.
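
In the meantime, you can get a lot of the way there over the API with a blunt system prompt. A minimal sketch with the OpenAI Python client (the model name and the exact wording are just my guesses, not anything OpenAI prescribes):

  from openai import OpenAI

  client = OpenAI()  # picks up OPENAI_API_KEY from the environment

  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # illustrative model name
      messages=[
          # Ask for the "recipe card" and nothing else.
          {"role": "system",
           "content": "Answer with only the final result. No preamble, "
                      "no platitudes, no caveats, no follow-up questions."},
          {"role": "user",
           "content": "Basic bread dough: quantities for 500g of flour?"},
      ],
  )
  print(resp.choices[0].message.content)

It won't stop every bit of waffle, but it cuts most of the grandmother's-lap preamble.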

sram1337 commented on SymbolicAI: A neuro-symbolic perspective on LLMs   github.com/ExtensityAI/sy... · Posted by u/futurisold
sram1337 · 2 months ago
This is the voodoo that excites me.

Examples I found interesting:

Semantic map lambdas

  S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])
  print(S.map('convert all fruits to vegetables'))
  # => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']

Comparison parameterized by context

  # Contextual greeting comparison
  greeting = Symbol('Hello, good morning!')
  similar_greeting = 'Hi there, good day!'

  # Compare with specific greeting context
  result = greeting.equals(similar_greeting, context='greeting context')
  print(result) # => True

  # Compare with different contexts for nuanced evaluation
  formal_greeting = Symbol('Good morning, sir.')
  casual_greeting = 'Hey, what\'s up?'

  # Context-aware politeness comparison
  politeness_comparison = formal_greeting.equals(casual_greeting, context='politeness level')
  print(politeness_comparison) # => False

Bitwise ops

  # Semantic logical conjunction - combining facts and rules
  horn_rule = Symbol('The horn only sounds on Sundays.', semantic=True)
  observation = Symbol('I hear the horn.')
  conclusion = horn_rule & observation # => Logical inference

`interpret()` seems powerful.
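
For anyone curious what these operators boil down to, here's my rough guess at the general shape of something like `.map()`: a thin wrapper that turns the instruction plus the data into a prompt and parses the reply. Just a sketch against the OpenAI client, not SymbolicAI's actual internals; the class name and the prompt are made up:

  import json
  from openai import OpenAI

  client = OpenAI()

  class TinySymbol:
      """Toy stand-in for a semantic value wrapper, not SymbolicAI's implementation."""
      def __init__(self, value):
          self.value = value

      def map(self, instruction):
          # Send the data plus the natural-language instruction to the model
          # and ask for the transformed list back as a JSON array.
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[
                  {"role": "system",
                   "content": "Apply the instruction to the list. "
                              "Reply with a JSON array only, no prose."},
                  {"role": "user",
                   "content": f"Instruction: {instruction}\nList: {json.dumps(self.value)}"},
              ],
          )
          # A real implementation would need sturdier parsing and error handling.
          return TinySymbol(json.loads(resp.choices[0].message.content))

  print(TinySymbol(['apple', 'banana', 'cat']).map('convert all fruits to vegetables').value)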

OP, what inspired you to make this? Where are you applying it? What has been your favorite use case so far?
