Readit News
Frannky commented on How to effectively write quality code with AI   heidenstedt.org/posts/202... · Posted by u/i5heu
Frannky · 2 days ago
I want to give gsd + opencode + Cerebras Code a try. Any experience?
Frannky commented on Agent Skills   agentskills.io/home... · Posted by u/mooreds
empath75 · 6 days ago
They're basically all trade-offs between context size/token use and flexibility. If you can write a bash or Python script, an API, or an MCP to do what you want, then write a bash or Python script to do it. You can even include it in the skill.

My general design principle for agents is that the top-level context (i.e. claude.md, etc.) is primarily "information about information": a list of skills, MCPs, etc., a very general overview, and a limited amount of information the agent always needs with every request. Everything more specific goes in a skill, which is mostly some very light-touch instructions for how to use the various tools we have (scripts, APIs, and MCPs).

I have found that people very often add _way_ too much information to claude.md's and skills. Claude knows a lot of stuff already! Keep your information to things specific to whatever you are working on that it doesn't already know. If your internal processes and house style are super complicated to explain to Claude and it keeps making mistakes, you might want to adapt to Claude instead of the other way around. Claude itself makes this mistake! If you ask it to build a claude.md, it'll often fill it with extraneous stuff it already knows. You should regularly trim it.
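To make the "light touch" idea concrete, here's a minimal sketch of what such a skill might look like. The folder layout follows the SKILL.md convention (YAML frontmatter with a name and description); the skill name, script, and house rules are invented for illustration:

```markdown
---
name: release-notes
description: Generate release notes in our house style. Use when the user asks for release notes or a changelog summary.
---

# Release notes

Run `scripts/release_notes.py <tag>` to collect merged PRs since the tag.
Don't explain git basics; only apply the house rules below:

- One bullet per user-facing change, imperative mood.
- Link the PR number, never the commit hash.
```

The heavy lifting lives in the script; the skill itself is just a pointer plus the few rules the model can't already know.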

Frannky · 6 days ago
Thanks, super useful!
Frannky commented on Agent Skills   agentskills.io/home... · Posted by u/mooreds
replwoacause · 6 days ago
Are you spending a fortune on running OpenClaw?
Frannky · 6 days ago
It's free with Qwen OAuth.
Frannky commented on Agent Skills   agentskills.io/home... · Posted by u/mooreds
Frannky · 6 days ago
I started playing with skills yesterday. I'm not sure whether it's just easier for the LLM to call APIs from inside the skill, moving the heavier code behind an endpoint that the agent can call instead.

I have a feeling that otherwise it becomes too messy for agents to reliably handle a lot of complex stuff.

For example, I have OpenClaw automatically looking for trending papers, turning them into fun stories, and then sending me the text via Telegram so I can listen to it in the ElevenLabs app.

I'm not sure whether it's better to have the story-generating system behind an API or to code it as a skill — especially since OpenClaw already does a lot of other stuff for me.
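One low-friction middle ground for the API-vs-skill question is to keep only the delivery step with the agent and put the story generator behind an endpoint. As a sketch, here's the Telegram delivery half in stdlib Python. The environment variable names are hypothetical; the 4096-character cap is Telegram's documented limit for a single bot message:

```python
import os
import urllib.parse
import urllib.request

TELEGRAM_LIMIT = 4096  # Telegram caps a single message at 4096 characters


def chunk_text(text: str, limit: int = TELEGRAM_LIMIT) -> list[str]:
    """Split a long story into Telegram-sized chunks, preferring
    paragraph boundaries and hard-splitting oversized paragraphs."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= limit:
            current = candidate
            continue
        if current:
            chunks.append(current)
        while len(para) > limit:  # a single paragraph longer than the cap
            chunks.append(para[:limit])
            para = para[limit:]
        current = para
    if current:
        chunks.append(current)
    return chunks


def send_story(story: str) -> None:
    """POST each chunk to the Telegram Bot API sendMessage endpoint."""
    token = os.environ["TELEGRAM_BOT_TOKEN"]   # hypothetical variable names
    chat_id = os.environ["TELEGRAM_CHAT_ID"]
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    for chunk in chunk_text(story):
        data = urllib.parse.urlencode({"chat_id": chat_id, "text": chunk}).encode()
        urllib.request.urlopen(urllib.request.Request(url, data=data))
```

With this split, the skill only needs to say "call the story endpoint, then `send_story` the result", and the messy scraping/summarizing logic stays out of the agent's context.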

Frannky commented on Clawdbot - open source personal AI assistant   github.com/clawdbot/clawd... · Posted by u/KuzeyAbi
Frannky · 13 days ago
It seems cool! How can I use it for free with acceptable quality? Also, what are the alternatives for a personal assistant that remembers stuff automatically and messages you about it?
Frannky commented on Why I Disappeared – My week with minimal internet in a remote island chain   kenklippenstein.com/p/why... · Posted by u/eh_why_not
Frannky · a month ago
Electronic devices are very effective distraction tools, especially phones. Companies and apps leverage our psychology and biology to get our attention, but we can take control of what we interact with—and if we remove the hooks, they won't be able to exploit them anymore.

What could help is taking control of how devices interact with us, rather than letting other people control that. This includes deciding which apps can be installed, how often they can notify or distract us, and so on.

A very basic step is using an app blocker. The ideal solution would be a phone with a local AI that is aligned with my personal preferences and instructions.

For example, it could deliver news just once a week from outlets across the entire political spectrum, eliminate social media entirely, and surface only important emails and messages at the most appropriate times.
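That policy is small enough to sketch as code. Here's a toy, rules-based version of such a filter; the categories, app model, and rules are illustrative, not a real notification API:

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    DELIVER_NOW = "deliver_now"     # surface immediately
    WEEKLY_DIGEST = "weekly_digest" # batch into the once-a-week review
    DROP = "drop"                   # never show


@dataclass
class Notification:
    app: str
    category: str   # e.g. "news", "social", "email"
    important: bool


def decide(n: Notification) -> Action:
    """Apply the personal policy described above to one notification."""
    if n.category == "social":
        return Action.DROP           # eliminate social media entirely
    if n.category == "news":
        return Action.WEEKLY_DIGEST  # news only once a week
    if n.category == "email" and n.important:
        return Action.DELIVER_NOW    # only important mail gets through now
    return Action.WEEKLY_DIGEST      # everything else waits for the digest
```

A local AI would mainly replace the hand-written rules (and the `important` flag) with judgments aligned to your stated preferences.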

Frannky commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
Frannky · 2 months ago
I see LLMs as searchers with the ability to change the data a little while staying in a valid space. If you think of them as searchers, it becomes automatic to make the search easy (small context, small precise questions), and you won't keep trying again and again if the code isn't working (no data in the training set). You'll also realize that if a language is not well represented in the training data, they may not work well.

The more specific and concise you are, the easier it will be for the searcher. Also, the less modification, the better, because the more you try to move away from the data in the training set, the higher the probability of errors.

I would do it like this:

1. Open the project in Zed
2. Add the Gemini CLI, Qwen Code, or Claude to the agent system (use Gemini or Qwen if you want to do it for free, or Claude if you want to pay for it)
3. Ask it to correct a file (if the files are huge, it might be better to split them first)
4. Test if it works
5. If not, try feeding the file and the request to Grok or Gemini 3 Chat
6. If nothing works, do it manually
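The "split huge files first" part of step 3 can itself be a small script. A sketch for Python sources, using only the stdlib `ast` module, that breaks a file into top-level chunks so each request to the model carries just the piece being fixed:

```python
import ast


def split_top_level(source: str) -> list[str]:
    """Return the source of each top-level statement (import, function,
    class, ...) as its own chunk, decorators included."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        decos = getattr(node, "decorator_list", [])
        # Decorated defs report the `def` line; start at the first decorator.
        start = (min(d.lineno for d in decos) if decos else node.lineno) - 1
        end = node.end_lineno  # inclusive, 1-based
        chunks.append("\n".join(lines[start:end]))
    return chunks
```

Each chunk, plus a one-line description of the change you want, is a much easier "search" for the model than the whole file.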

If instead you want to start something new, one-shot prompting can work pretty well, even for large tasks, if the data is in the training set. Ultimately, I see LLMs as a way to legally copy the code of other coders more than anything else.

Frannky commented on I got an Nvidia GH200 server for €7.5k on Reddit and converted it to a desktop   dnhkng.github.io/posts/ho... · Posted by u/dnhkng
Frannky · 2 months ago
Wow! Kudos for thinking it was possible and making it happen. I was wondering how long it would be before big local models were possible under 10k—pretty impressive. Qwen3-235B can do mundane chat, coding, and agentic tasks pretty well.
Frannky commented on Vibe coding: Empowering and imprisoning   anildash.com/2025/12/02/v... · Posted by u/zdw
Frannky · 2 months ago
I am not sure I am doing it the right way, or if there's a right way, but what I do is generate small files one at a time, as a functional system of stateless input/output blocks. That way I can focus first on architecture, then on the stateless input/output Lego blocks, and just ask the AI to generate the Lego blocks. Like this, everything is easy to keep in mind, and to update and change after experiencing the tech and letting my reactions tell me what to change. It has been working great for me: it strikes a good balance between simple and robust, fast to build, and easy to update. The other thing is that since I haven't spent too much energy on writing the code, it is very easy to detach emotionally from it when reality gives me new info that requires throwing out part of the system.
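A minimal illustration of the stateless Lego-block idea (all names invented): each block is a pure function from input to output, and the system is just their composition, so any block can be regenerated by the AI or swapped out without touching the rest.

```python
from functools import reduce


def compose(*blocks):
    """Chain stateless blocks left to right: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, block: block(acc), blocks, x)


# Three tiny blocks, each trivially testable in isolation.
def normalize(text: str) -> str:
    return text.strip().lower()


def tokenize(text: str) -> list[str]:
    return text.split()


def count(tokens: list[str]) -> int:
    return len(tokens)


# The "system" is only the wiring; no block holds state.
word_count = compose(normalize, tokenize, count)
```

Throwing out a block means deleting one function and asking for a replacement with the same input/output shape; nothing else changes.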

u/Frannky

Karma: 54 · Cake day: September 13, 2025