Readit News logoReadit News
13pixels commented on Show HN: Multi-attribute decision frameworks for tech purchases    · Posted by u/boundedreason
13pixels · 15 hours ago
This is a really interesting application of LLMs. The lack of "repeatable, traceable results" is indeed a huge issue for any serious use case (we see this constantly in enterprise adoption).

Have you found that forcing the LLM into a structured scoring framework reduces its tendency to hallucinate specs? Or does it just hallucinate the scores with more confidence?

Also, curious if you've tried different models for the "scoring" vs "reasoning" steps. We've found Claude is much better at adhering to complex constraints than GPT-4o for tasks like this.

13pixels commented on Show HN: GHOSTYPE – AI voice input that learns your writing style    · Posted by u/astnd
13pixels · 15 hours ago
The "Virtual Personality Engine" / style transfer is a killer feature. One of the biggest issues with LLM-generated content (whether for voice, text, or even brand answers) is the "generic AI accent" that just screams "I was generated by an LLM".

Curious how you handle the "style vector" update frequency? Does it continuously learn as I dictate more, or is it a static snapshot? For brand/personal voice consistency, continuous learning would be huge but might drift.

Also, +1 for the retro CRT aesthetic.

13pixels commented on Ask HN: How does ChatGPT decide which websites to recommend?    · Posted by u/nworley
13pixels · 15 hours ago
The shift to zero-click discovery is definitely real. We've been tracking this "AI visibility" metric internally too (using a mix of prompt injection and search monitoring) and found that brand mentions correlate strongly with structured data quality and entity clarity, rather than traditional backlinks.

It seems like LLMs prioritize "authoritative entities" over "keyword-optimized pages". For example, if you're cited in authoritative industry reports or have a clear Knowledge Graph entity, you're much more likely to be recommended.

One interesting thing we've noticed: different models have distinct "personalities" in recommendations. Perplexity leans heavily on recent news/citations, while ChatGPT seems to favor established, long-term authority sources.

(Disclosure: working on this problem at VectorGap)

13pixels commented on Eight more months of agents   crawshaw.io/blog/eight-mo... · Posted by u/arrowsmith
pjc50 · 2 days ago
> Right now every app feels like a walled garden, with broken UX, constant redesigns, enormous amounts of telemetry and user manipulation

OK, but: that's an economic situation.

> so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.

Right, so there's less profit in it.

To me it seems this will make the market more adversarial, not less. Increasing amounts of effort will be expended to prevent LLMs interacting with your software or web pages. Or in some cases exploit the user's agentic LLM to make a bad decision on their behalf.

13pixels · 2 days ago
the "exploit the user's agentic LLM" angle is underappreciated imo. we already see prompt injection attacks in the wild -- hidden text on web pages that tells the agent to do things the user didn't ask for. now scale that to every e-commerce site, every SaaS onboarding flow, every comparison page.

it's basically SEO all over again but worse, because the attack surface is the user's own decision-making proxy. at least with google you could see the search results and decide yourself. when your agent just picks a vendor for you based on what it "found," the incentive to manipulate that process is enormous.

we're going to need something like a trust layer between agents and the services they interact with. otherwise it's just an arms race between agent-facing dark patterns and whatever defenses the model providers build in.

13pixels commented on Eight more months of agents   crawshaw.io/blog/eight-mo... · Posted by u/arrowsmith
dmk · 2 days ago
The real insight buried in here is "build what programmers love and everyone will follow." If every user has an agent that can write code against your product, your API docs become your actual product. That's a massive shift.
13pixels · 2 days ago
This extends further than most people realize. If agents are the primary consumers of your product surface, then the entire discoverability layer shifts too. Right now Google indexes your marketing page -- soon the question is whether Claude or GPT can even find and correctly describe what your product does when a user asks.

We're already seeing this with search. Ask an LLM "what tools do X" and the answer depends heavily on structured data, citation patterns, and how well your docs/content map to the LLM's training. Companies with great API docs but zero presence in the training data just won't exist to these agents.

So it's not just "API docs = product" -- it's more like "machine-legible presence = existence." Which is a weird new SEO-like discipline that barely has a name yet.

Deleted Comment

13pixels commented on Show HN: I analyzed 5k comments to quantify the Jira vs. Linear sentiment gap   deltabrandcheck.com/battl... · Posted by u/13pixels
13pixels · 3 months ago
Hey HN, OP here.

I’ve been seeing the "Jira vs. Linear" holy war play out on my timeline for months. It usually boils down to anecdotal "Jira is slow" vs. "Linear doesn't scale" arguments.

I wanted to see if I could quantify that sentiment using actual data.

I built a Brand Alignment Engine (Delta) to scrape public sentiment (Reddit, G2, News) and measure the gap between a brand’s intended strategy and market reality.

The Tech Stack:

Frontend: Next.js 16 (App Router) + Tailwind v4

Backend: AdonisJS (TypeScript)

Search: Exa.ai (Neural Search to find the high-quality discussions)

Reasoning: Gemini Pro 3.0 (To synthesize the "Vibe Check" into a 0-100 score)

The Findings (Jira vs. Linear): The data actually surprised me.

Linear (Score 94): The sentiment is incredibly high on "Flow" and "Speed," but the analysis flagged a legitimate gap in "Enterprise Reporting."

Jira (Score 91): Higher than I expected. The market respects the "Utility" and "Scale," but the negative anchors are massive around "Complexity" and "Config."

The most interesting datapoint was Linear's new "Form Templates" feature—the AI flagged it as an 85% threat to Jira Service Desk specifically.

It’s a client-side MVP for now (well, mostly—backend handles the heavy lifting). I’d love feedback on the data visualization or the scraping logic.

Cheers!

u/13pixels

KarmaCake day2March 2, 2016View Original