echollama commented on Engineered Addictions   masonyarbrough.substack.c... · Posted by u/echollama
nine_k · 2 months ago
> healthier relationship with our tools

The point is that these are not tools; they provide a direct kick, which is a goal in itself. Whisky is not a tool.

echollama · 2 months ago
agreed, it's like saying that jerking off makes you healthier
echollama commented on Show HN: Ask-human-mcp – zero-config human-in-loop hatch to stop hallucinations   masonyarbrough.com/blog/a... · Posted by u/echollama
superb_dev · 3 months ago
This site is impossible to read on my phone. Part of the left side of the screen is cut off and I can’t scroll it into view.
echollama · 3 months ago
i fixed this
multjoy · 3 months ago
Conversate is not a word.
echollama · 3 months ago
yes it is
mgraczyk · 3 months ago
Why not have the model ask in the chat? It's a lot easier to just talk to it than open a file. The article mentions cursor so it sounds like you're already using cursor?
echollama · 3 months ago
that would probably work better, this is just how i threw it together as an internal tool a long time ago. i just improved it and shipped it as open source.
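For readers unfamiliar with the mechanism under discussion: the core trick is an MCP tool whose call simply blocks until a human writes an answer into a watched file. Here is a minimal sketch of that idea, assuming the official `mcp` Python SDK; the file name, question markers, and ANSWER: convention are made up for illustration and this is not the actual ask-human-mcp source:

```python
# minimal sketch, not the real ask-human-mcp implementation.
import time
import uuid
from pathlib import Path

from mcp.server.fastmcp import FastMCP

QA_FILE = Path("ask_human.md")  # hypothetical q&a file the human keeps open
mcp = FastMCP("ask-human")

@mcp.tool()
def ask_human(question: str) -> str:
    """Append a question to the q&a file and block until a human answers."""
    marker = f"### Q-{uuid.uuid4().hex[:8]}"
    with QA_FILE.open("a", encoding="utf-8") as f:
        f.write(f"\n{marker}\n{question}\n\nANSWER:\n")
    # poll until text appears after this question's ANSWER: line
    while True:
        block = QA_FILE.read_text(encoding="utf-8").split(marker, 1)[1]
        answer = block.split("ANSWER:", 1)[1].split("###", 1)[0].strip()
        if answer:
            return answer
        time.sleep(2)

if __name__ == "__main__":
    mcp.run()  # stdio transport, so an editor like cursor can launch it
```

The blocking call is the whole design: the agent's tool call doesn't return until the file changes, so no extra protocol support for "waiting on a human" is needed.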
threeseed · 3 months ago
> an mcp server that lets the agent raise its hand instead of hallucinating

a) It doesn't know when it's hallucinating.

b) It can't provide you with any accurate confidence score for any answer.

c) Your library is still useful but any claim that you can make solutions more robust is a lie. Probably good enough to get into YC / raise VC though.

echollama · 3 months ago
reasoning models know when they're close to hallucinating: they can tell they're lacking context or understanding, and that a question would solve it.

this is a streamlined implementation of an internally scraped-together tool that i decided to open-source for people to either use or build off of.

mgraczyk · 3 months ago
But then that means they are editing a markdown file on your computer? How is that meant to work?

I like the idea but would rather it use Slack or something if it's meant to ask anyone.

echollama · 3 months ago
this is mainly meant as a way to conversate with the model while you're programming with it, not to pull questions out to a team. it's more for pair programming. a markdown file works best for syntax in an llm prompt and is also just the easiest thing to keep open and answer questions in. if i had more time i'd build an extension into cursor.
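To make that workflow concrete, a Q&A file mid-session might look something like this; the heading and ANSWER marker are illustrative, not necessarily the tool's real format:

```markdown
### Q-3f9a2c1b
should retries use exponential backoff, or is a fixed 500ms delay
fine for this internal service?

ANSWER:
fixed delay is fine, the upstream service is rate-limited anyway
```

The human just types under ANSWER: in their editor and saves; the blocked tool call picks up the text and returns it to the model as the tool result.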
kjhughes · 3 months ago
Cool conceptually, but how exactly does the agent know when it's unsure or stuck?
echollama · 3 months ago
the reasoning aspect of most llms these days knows when it's unsure or stuck; you can see that in its thinking tokens. the model will see this mcp and call it when it's in that state. it could still benefit from a rules file telling it to ask, though cursor doesn't reliably follow "ask for help" rules, which is partly why i built this.
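As a concrete example of the kind of rules-file nudge being described (hypothetical wording, e.g. in a project's .cursorrules file), it might read:

```text
# .cursorrules (illustrative)
when you are missing context, unsure which of several valid approaches
the user wants, or about to guess at a fact you cannot verify, do not
guess. call the ask_human tool with one short, specific question and
wait for the answer before continuing.
```

Whether the agent obeys such a rule consistently is exactly the caveat raised above.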

u/echollama

Karma: 305 · Cake day: January 13, 2025
About
startup founder and ai engineer