Readit News
pollinations commented on AI in Search is driving more queries and higher quality clicks   blog.google/products/sear... · Posted by u/thm
andy99 · 20 days ago
Counterpoint from yesterday.

https://news.ycombinator.com/item?id=44798215

From that article

  Mandatory AI summaries have come to Google, and they gleefully showcase hallucinations while confidently insisting on their truth. I feel about them the same way I felt about mandatory G+ logins when all I wanted to do was access my damn YouTube account: I hate them. Intensely.
But why listen to a third party when you can hear it from the horse's mouth.

pollinations · 20 days ago
Whether you believe the article or not, the point you posted seems orthogonal to what Google is saying.

They're not claiming anything about the quality of AI summaries. They are analyzing how traffic to external sites has been affected.

pollinations commented on Qwen-Image: Crafting with native text rendering   qwenlm.github.io/blog/qwe... · Posted by u/meetpateltech
zippothrowaway · 22 days ago
You're probably going to have to wait a couple of days for 4 bit quantized versions to pop up. It's 20B parameters.
pollinations · 22 days ago

   import torch
   from diffusers import DiffusionPipeline
   from diffusers.quantizers import PipelineQuantizationConfig

   # Assumed from context; adjust model name and device as needed
   model_name = "Qwen/Qwen-Image"
   device = "cuda"

   # Configure NF4 quantization
   quant_config = PipelineQuantizationConfig(
       quant_backend="bitsandbytes_4bit",
       quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
       components_to_quantize=["transformer", "text_encoder"],
   )

   # Load the pipeline with NF4 quantization
   pipe = DiffusionPipeline.from_pretrained(
       model_name,
       quantization_config=quant_config,
       torch_dtype=torch.bfloat16,
       use_safetensors=True,
       low_cpu_mem_usage=True
   ).to(device)
Seems to use 17 GB of VRAM like this.

Update: this doesn't work well. This approach seems to be recommended instead: https://github.com/QwenLM/Qwen-Image/pull/6/files


pollinations commented on I read all of Cloudflare's Claude-generated commits   maxemitchell.com/writings... · Posted by u/maxemitchell
SrslyJosh · 3 months ago
> Reading through these commits sparked an idea: what if we treated prompts as the actual source code? Imagine version control systems where you commit the prompts used to generate features rather than the resulting implementation.

Please god, no, never do this. For one thing, why would you not commit the generated source code when storage is essentially free? That seems insane for multiple reasons.

> When models inevitably improve, you could connect the latest version and regenerate the entire codebase with enhanced capability.

How would you know if the code was better or worse if it was never committed? How do you audit for security vulnerabilities or debug with no source code?

pollinations · 3 months ago
I'd say commit a comprehensive testing system with the prompts.

Prompts are, in a sense, what higher-level programming languages were to assembly. Sure, there is a crucial difference: reproducibility. I could try to write down my thoughts on why I think it won't be such a problem in the long run. I could be wrong, of course.

I run https://pollinations.ai which serves over 4 million monthly active users quite reliably. It is mostly coded with AI. For about a year there hasn't been a significant human commit. You can check the codebase. It's messy, but no messier than my codebases were pre-LLMs.

I think prompts + tests in code will be the medium-term solution. Humans will spend more time testing different architecture ideas and will be involved in reviewing larger changes, particularly those that significantly alter the tests.
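A minimal sketch of what "commit the prompt plus its tests" could look like. Everything here is hypothetical and illustrative (the prompt text, the `slugify` function, and the layout are not from pollinations.ai); the hand-written function stands in for whatever a model would generate:

```python
import re

# Hypothetical layout: the prompt is the committed "source",
# the assertions below are the committed, executable spec.
PROMPT = """Write a Python function slugify(title) that lowercases the
title, replaces runs of non-alphanumeric characters with single hyphens,
and strips leading and trailing hyphens."""

# Stand-in for model output: a hand-written reference implementation.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The tests are the contract: regenerate from PROMPT with a newer model,
# and the new implementation only ships if these still pass.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --Already Sluggy--  ") == "already-sluggy"
print("spec passed")
```

The point is that the tests, not the generated code, carry the guarantees that survive regeneration.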

pollinations commented on What If We Had Bigger Brains? Imagining Minds Beyond Ours   writings.stephenwolfram.c... · Posted by u/nsoonhui
bdbenton5255 · 3 months ago
There is a popular misconception that neural networks accurately model the human brain. It is more a metaphor for neurons than a complete physical simulation of the human brain.

There is also a popular misconception that LLMs are intelligently thinking programs. They are more like models that predict words and appear as a human intelligence.

That being said, it is certainly theoretically possible to simulate human intelligence and scale it up.

pollinations · 3 months ago
This reads pretty definitively. Whether LLMs are intelligently thinking programs is being actively debated in cognitive science and AI research.
pollinations commented on Show HN: My LLM CLI tool can run tools now, from Python code or plugins   simonwillison.net/2025/Ma... · Posted by u/simonw
xk_id · 3 months ago
The transition from assembly to C was to a different layer of abstraction within the same context of deterministic computation. The transition from programming to LLM prompting is to a qualitatively different context, because the process is no longer deterministic, nor debuggable. So your analogy fails to apply in a meaningful way to this situation.
pollinations · 3 months ago
Why isn't it debuggable?
pollinations commented on Recent results show that LLMs struggle with compositional tasks   quantamagazine.org/chatbo... · Posted by u/thm
mohsen1 · 7 months ago
With a modified prompt, made to look less like the original, it thought 6x more:

https://chatgpt.com/share/679f086d-a758-8007-b240-38e6843037...

pollinations · 7 months ago
I don't think individual examples settle these kinds of discussions; for me, thinking length can easily vary by 6x with exactly the same input and parameters.
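A toy illustration of why (pure simulation, no model calls): with temperature sampling, the decision to stop is stochastic at each step, so response length varies run to run even under identical settings. The stop probability here is an arbitrary assumption:

```python
import random

def simulated_decode(seed: int, stop_prob: float = 0.01) -> int:
    """Toy stand-in for an LLM decode loop: at each step, a (hypothetical)
    stop token is sampled with probability stop_prob, so the number of
    'thinking' tokens differs run to run despite identical parameters."""
    rng = random.Random(seed)
    tokens = 0
    while rng.random() > stop_prob and tokens < 10_000:
        tokens += 1
    return tokens

# Same stop_prob for every run; only the random seed differs.
lengths = [simulated_decode(seed) for seed in range(10)]
print(sorted(lengths))
```

Single runs mostly measure sampling noise, which is why one shared chat link proves little either way.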
pollinations commented on OpenAI O3-Mini   openai.com/index/openai-o... · Posted by u/johnneville
esperent · 7 months ago
Claude uses Shadcn-ui extensively in the web interface, to the point where I think it's been trained to use it over other UI components.

So I think you got lucky: you're asking it to write using a very specific code library that it's good at, because it happens to use it for its main user base on the web chat interface.

I wonder if you were using a different component library, or using Svelte instead of React, would you still find Claude the best?

pollinations · 7 months ago
I was recently trying to write a relatively simple htmx service with Claude. I was surprised at how much worse it was when it's not React.
