taw1285 commented on $50 PlanetScale Metal Is GA for Postgres   planetscale.com/blog/50-d... · Posted by u/ksec
taw1285 · a day ago
For the less experienced devs among us: how should I be thinking about choosing between this and Amazon Aurora?
taw1285 commented on OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI   simonwillison.net/2025/De... · Posted by u/simonw
taw1285 · 4 days ago
Curious if anyone has applied this "Skills" mindset to how you build the tool calls for your LLM agent applications?

Say I have a CMS (I use a thin layer of the Vercel AI SDK) and I want to let users interact with it via chat: tag a blog post, add an entry, etc. Should those actions be organized into discrete skill units like that? And how would we go about adding progressive discovery?
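
Roughly what I'm imagining, as a hypothetical sketch with the AI SDK's tool() helper (the CMS functions, model name, and exact option names are my assumptions and vary between SDK versions):

```typescript
// Hypothetical sketch: one "skill" = a named bundle of related CMS tools plus the
// instructions for using them. Tool bodies just stub out the real CMS calls.
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const blogSkill = {
  instructions: 'Use these tools to manage blog entries. Create an entry before tagging it.',
  tools: {
    createBlogEntry: tool({
      description: 'Create a new blog entry in the CMS',
      parameters: z.object({ title: z.string(), body: z.string() }),
      execute: async ({ title, body }) => ({ id: 'draft-1', title, body }), // call the CMS here
    }),
    tagBlogEntry: tool({
      description: 'Attach a tag to an existing blog entry',
      parameters: z.object({ entryId: z.string(), tag: z.string() }),
      execute: async ({ entryId, tag }) => ({ entryId, tag }),
    }),
  },
};

// Progressive discovery (as I understand it): only attach a skill's instructions and
// tools once the conversation turns toward that area, instead of exposing everything.
async function handleChat(userMessage: string, blogIntent: boolean) {
  return generateText({
    model: openai('gpt-4o-mini'),
    system: blogIntent ? blogSkill.instructions : 'You are a CMS assistant.',
    prompt: userMessage,
    tools: blogIntent ? blogSkill.tools : {},
  });
}
```

So the skill unit would be the instructions-plus-tools bundle, and discovery is just deciding per turn which bundles get attached. Is that the right way to read it?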

taw1285 commented on Cognitive and mental health correlates of short-form video use   psycnet.apa.org/fulltext/... · Posted by u/smartmic
taw1285 · a month ago
This tracks for me. I have deleted TikTok and Instagram, but now I find myself browsing short videos on X!! Addiction is a crazy thing.

I have a daily 30-minute one-way commute. I usually put on a YouTube video about startups or a tech talk, but I find myself forgetting it all the day after. I am curious how you go about remembering the content without being able to take notes while driving.

taw1285 commented on Notes on Managing ADHD   borretti.me/article/notes... · Posted by u/amrrs
taw1285 · 4 months ago
Thank you for this article. I have yet to discuss this with my doctor, but I have noticed several areas where I'm severely lacking compared to my peers:

1. My brain drifts away very easily. Even in an important work conversation, my mind starts wandering to a completely different project or an upcoming meeting.

2. I have a hard time remembering things and events that my spouse and others can easily recall (e.g., which restaurants we have been to).

3. I can't seem to form an opinion on very basic things. Do you like restaurant A or restaurant B better? Option A or option B? I can't decide or come up with any heuristics.

At first I chalked it up to being too critical of myself and assumed others have the same issues, but that doesn't seem to be the case. Can these all be rolled up into the same conversation with my doctor?

taw1285 commented on We put a coding agent in a while loop   github.com/repomirrorhq/r... · Posted by u/sfarshid
taw1285 · 4 months ago
This is so amazing. Are there any resources or blog posts on how people do this for production services? In my case, I need to rewrite a big chunk of my commerce stack from Ruby to TypeScript.
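
If I'm reading the repo right, the core pattern is basically just this (a hypothetical sketch; the two helpers are placeholders I made up, not a real agent API):

```typescript
// Hypothetical sketch of the "coding agent in a while loop" pattern:
// keep re-running the same migration prompt until the checks finally pass.

declare function runMigrationAgent(prompt: string): Promise<void>; // shell out to your coding agent of choice
declare function checksPass(): Promise<boolean>;                   // e.g. type-check plus the test suite

async function migrateRubyToTypeScript(): Promise<void> {
  const prompt =
    'Port the next Ruby module in app/ to TypeScript, keep behavior identical, and keep the tests green.';
  while (!(await checksPass())) {
    await runMigrationAgent(prompt);
  }
}
```

What I'd love to see is how people wrap that loop with guardrails (tests, review, rollbacks) before pointing it at a production codebase.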
taw1285 commented on Gemini Embedding: Powering RAG and context engineering   developers.googleblog.com... · Posted by u/simonpure
stillpointlab · 5 months ago
> Embeddings are crucial here, as they efficiently identify and integrate vital information—like documents, conversation history, and tool definitions—directly into a model's working memory.

I feel like I'm falling behind here, but can someone explain this to me?

My high-level view of embedding is that I send some text to the provider, they tokenize the text and then run it through some NN that spits out a vector of numbers of a particular size (looks to be variable in this case including 768, 1536 and 3072). I can then use those embeddings in places like a vector DB where I might want to do some kind of similarity search (e.g. cosine difference). I can also use them to do clustering on that similarity which can give me some classification capabilities.

But how does this translate to these things being "directly into a model's working memory"? My understanding is that with RAG I just throw a bunch of the embeddings into a vector DB as keys, but the ultimate text I send in the context to the LLM is the source text that the keys represent. I don't actually send the embeddings themselves to the LLM.

So what is this marketing stuff about "directly into a model's working memory"? Is my mental view wrong?

taw1285 · 4 months ago
Your comment really helps me improve my mental model of LLMs. Can someone smarter help me verify my understanding:

1) At the end of the day, we are still sending raw text to the LLM as input and getting text back as the response.

2) RAG/embeddings are just a way to identify a "certain chunk" to be included in the LLM input so that you don't have to dump the entire ground-truth document into the LLM. Take Everlaw, for example: all of their legal docs are stored as embeddings, and RAG/tool calls retrieve the relevant documents to feed into the LLM input.
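
Concretely, the flow in my head looks something like this minimal sketch with the Vercel AI SDK (the documents, model names, and exact option names are assumptions):

```typescript
// Minimal RAG sketch: embeddings are only used to pick the chunk; the model itself
// still just receives plain text pasted into the prompt.
import { embed, embedMany, cosineSimilarity, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const docs = [
  'Refund policy: orders can be returned within 30 days.',
  'Shipping: standard delivery takes 3-5 business days.',
  'Warranty: electronics carry a one-year warranty.',
];

async function answer(question: string): Promise<string> {
  // 1) Embed the corpus and the question (in practice the corpus lives pre-embedded in a vector DB).
  const { embeddings } = await embedMany({
    model: openai.embedding('text-embedding-3-small'),
    values: docs,
  });
  const { embedding: queryEmbedding } = await embed({
    model: openai.embedding('text-embedding-3-small'),
    value: question,
  });

  // 2) Rank documents by cosine similarity and keep the best match.
  const best = docs
    .map((text, i) => ({ text, score: cosineSimilarity(queryEmbedding, embeddings[i]) }))
    .sort((a, b) => b.score - a.score)[0];

  // 3) The LLM only ever sees raw text: the retrieved chunk plus the question.
  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: `Use this context to answer.\n\nContext: ${best.text}\n\nQuestion: ${question}`,
  });
  return text;
}
```

The embeddings never reach the model; they only decide which source text gets pasted into the prompt.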

So in that sense, what do these non-foundation-model startups mean when they say they are training or fine-tuning models? Where does the line fall between putting information into the LLM's input and having it baked into the model weights?
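
The way I currently picture the line (happy to be corrected; the record below is just illustrative of the shape OpenAI-style chat fine-tuning data takes):

```typescript
// Illustrative contrast, not a real training pipeline.

// Fine-tuning: thousands of records roughly like this go into a training file, and the
// behavior ends up baked into the weights, so the fact no longer has to appear in the prompt.
const fineTuningRecord = {
  messages: [
    { role: 'user', content: 'What is the return window?' },
    { role: 'assistant', content: 'Orders can be returned within 30 days.' },
  ],
};

// RAG / context engineering: the same fact is retrieved and pasted into the prompt on
// every single request, and the weights never change.
const ragPrompt = [
  'Context: Orders can be returned within 30 days.',
  '',
  'Question: What is the return window?',
].join('\n');
```

So is the distinction basically that anything request-specific goes into the input, while fine-tuning is for behavior or domain knowledge you want available without retrieving it each time?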

taw1285 commented on Databricks in talks to acquire startup Neon for about $1B   upstartsmedia.com/p/scoop... · Posted by u/ko_pivot
taw1285 · 7 months ago
I am fairly new to all these data pipeline services (Databricks, Snowflake, etc.).

Say right now I have an e-commerce site with 20K MAU. All metrics go to Amplitude, and we can use that to see DAU, retention, and purchase volume. At what point in my startup's lifecycle do we need to enlist these services?

taw1285 commented on Ask HN: Has anyone quit their startup (VC-backed) over cofounder disagreements?    · Posted by u/stuck12345
nsypteras · 8 months ago
Cofounder splits are extremely common. Cofounder "couples" counselors are a thing you could look into to help resolve your differences. Your VC might have recommendations for one. If you ultimately decide to split, I'd recommend at least one of you (or "the company"?) getting a lawyer to draw up a formal separation agreement you both sign in order to split in the cleanest possible way.
taw1285 · 8 months ago
This is very interesting to me. From this thread: https://news.ycombinator.com/item?id=43472971, I am wondering if there are anecdotal stories of how equity is being handled after a split.

On one hand, if the leaving co-founder retains all their equity, it creates a sandbagging situation: a chunk of the cap table is held by someone who is no longer useful to the business. On the other hand, it feels right for the leaving co-founder to enjoy some upside for the years they put in.

taw1285 commented on Write to Escape Your Default Setting   kupajo.com/write-to-escap... · Posted by u/kolyder
agentultra · 10 months ago
I once heard that "writing is thinking," and it has stuck with me throughout my life.

You really haven't thought about it hard enough if you haven't tried writing it down.

I have a whole system of journals that I use to collect my thoughts across various subjects I dabble in. Algorithms: there's a journal for that. Abstract algebra? There's a journal for that. Etc.

At work? I use a bullet journal... I add sections for the projects I'm working on. When I'm refactoring an old area of the code or investigating a hard-to-diagnose error, I start writing. I ask questions, get answers, and update my project journal. It helps me clarify the issue, and I find that once I can explain the system or the error clearly, the answers (or how to find them) become obvious.

It may seem quaint, eccentric, or outdated, but it's a practical, reliable tool. Ask questions and write down the answers. Eventually a coherent narrative and a full thought will form before you.

taw1285 · 10 months ago
I want to get better at taking project notes for work via Obsidian. I'm curious whether you have a different page per project or just put everything in one giant log. I like the idea of organizing it, but it takes me a bit of time to figure out which notebook a note should go under.
taw1285 commented on Show HN: PurePlates – A Recipe Scraping iOS App   apps.apple.com/ca/app/pur... · Posted by u/CZubrecki
taw1285 · a year ago
Love it! It would be cool to be able to auto-tag cuisine type. Did you use an LLM to scrape and parse recipe details?
