What was the point of this judgmental comment?
---
"Within the last few weeks, Mark and I have built and launched Offline.Kids.
It’s a website to help parents reconnect with their kids and for kids to reconnect with the world around them.
Offline.Kids is a directory of screen-free activities for all ages. Each activity is categorised so that parents can find appropriate activities for their situation.
For example, you can find:
quick, clean activities for a 6 year old
outdoor kids activities that take 1-2 hours
low energy indoor crafts
We built the site off the back of our new directory landing page plugin (catchy name still in progress!). It instantly creates thousands of SEO-friendly landing pages for the activities. It’s early days, but Google is successfully indexing the pages and we’ll see how the rankings change over time.
So, if you’re looking for screen-free activities for your kids, check out the website, and share with anyone you think might find it useful!"
Criticize the process of creating it all you want.
The issue they have is with low-quality content. If the AI-generated content were better than most human-created podcasts and was making their engagement numbers go up, I doubt they would be calling them fake or removing them.
Platitude! Here’s a bunch of words that a normal human being would say followed by the main thrust of the response that two plus two is four. Here are some more words that plausibly sound human!
I realize that this is of course how it all actually works underneath — LLMs have to waffle their way to the point because of the nature of their training — but is there any hope to being able to post-process out the fluff? I want to distill down to an actual answer inside the inference engine itself, without having to use more language-corpus machinery to do so.
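One crude direction, short of reaching into the inference engine itself, is to post-filter the output. A minimal sketch, assuming a hand-maintained list of filler patterns (the function name and patterns here are illustrative, not a real tool):

```python
import re

# Illustrative filler-phrase patterns; a real filter would need a much
# larger, curated list and probably a model-based fluff classifier.
FILLER = [
    r"^(Great|Good) question[.!]?\s*",
    r"^Certainly[.!,]?\s*",
    r"^It'?s (important|worth) (to note|noting) that\s*",
]

def strip_fluff(text: str) -> str:
    """Remove known preamble/filler phrases from the start of an answer."""
    for pattern in FILLER:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    return text.strip()

print(strip_fluff("Great question! Two plus two is four."))
# => Two plus two is four.
```

This obviously only catches surface-level boilerplate; the harder problem is fluff that is interleaved with the actual answer.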
It’s like the age old problem of internet recipes. You want this:
500g wheat flour
280ml water
10g salt
10g yeast
But what you get is this:

It was at the age of five, sitting on my grandmother’s lap in the cool autumn sun in West Virginia that I first tasted the perfect loaf…
OpenAI has said they are working on making ChatGPT's output more configurable.
Examples I found interesting:
Semantic map lambdas
from symai import Symbol  # SymbolicAI package

S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])
print(S.map('convert all fruits to vegetables'))
# => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']
comparison parameterized by context

# Contextual greeting comparison
greeting = Symbol('Hello, good morning!')
similar_greeting = 'Hi there, good day!'
# Compare with specific greeting context
result = greeting.equals(similar_greeting, context='greeting context')
print(result) # => True
# Compare with different contexts for nuanced evaluation
formal_greeting = Symbol('Good morning, sir.')
casual_greeting = 'Hey, what\'s up?'
# Context-aware politeness comparison
politeness_comparison = formal_greeting.equals(casual_greeting, context='politeness level')
print(politeness_comparison) # => False
bitwise ops

# Semantic logical conjunction - combining facts and rules
horn_rule = Symbol('The horn only sounds on Sundays.', semantic=True)
observation = Symbol('I hear the horn.')
conclusion = horn_rule & observation # => Logical inference
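The trick behind those `&`-style operators is presumably ordinary Python operator overloading dispatching to an LLM backend. Here is a toy sketch of that pattern with no model at all; `Sym` and its fake backend are my own illustrative names, not the library's internals:

```python
class Sym:
    """Toy stand-in for an LLM-backed Symbol.

    Python's data model (__and__ for `&`) routes the expression to a
    backend; here the backend is faked by building a prompt string
    instead of calling a model.
    """
    def __init__(self, value):
        self.value = value

    def __and__(self, other):
        prompt = (f"Premise: {self.value}\n"
                  f"Observation: {other.value}\n"
                  "What follows logically?")
        # A real engine would send `prompt` to an LLM and wrap its answer.
        return Sym(prompt)

rule = Sym("The horn only sounds on Sundays.")
obs = Sym("I hear the horn.")
print((rule & obs).value)
```

The appeal is that ordinary-looking expressions become lazy queries against a semantic engine, which is exactly what makes `interpret()`-style APIs feel powerful.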
`interpret()` seems powerful. OP, what inspired you to make this? Where are you applying it? What has been your favorite use case so far?
For me it’s no different from generating code with Claude, except it’s generating prose. Without human direction your result ends up as garbage, but there’s no need to go and actually write all the prose yourself.
And I guess that just like with code, sometimes you have to hand craft something to make it truly good. But that’s probably not true for 80% of the story/code.