Readit News
wubrr commented on Streaming services are driving viewers back to piracy   theguardian.com/film/2025... · Posted by u/nemoniac
wubrr · 16 days ago
Your inability to provide a single example, which would immediately disprove my point, is the evidence.
wubrr commented on Streaming services are driving viewers back to piracy   theguardian.com/film/2025... · Posted by u/nemoniac
Gud · 16 days ago
Nothing dishonest with pirating. You wouldn’t download a car? Well I would.
wubrr · 16 days ago
I can't believe people still fall for the 'piracy bad' propaganda in 2025
wubrr commented on Streaming services are driving viewers back to piracy   theguardian.com/film/2025... · Posted by u/nemoniac
codedokode · 16 days ago
When billion dollar companies, which are praised and supported by governments, download pirated material and do not pay, why should ordinary people restrain themselves and pay? I cannot see how one can make moral arguments against piracy now. It makes no sense to pay if others are not paying and not punished for it. People also have a right to train their real neural network for free without paying.
wubrr · 16 days ago
There were never any good moral arguments against digital 'piracy' to begin with.
wubrr commented on How Silicon Valley can prove it is pro-family   thenewatlantis.com/public... · Posted by u/jger15
nickff · 17 days ago
Many companies encourage employees to go home and relax or engage in other rewarding activities; it can be very beneficial for the employer. For one thing, it encourages people to separate their work lives and home lives, which can decrease stress (often encouraging productivity and increasing tenure), as well as encouraging people to treat their office as somewhere to focus on work (to the exclusion of distractions). Additionally, in many fields it can be helpful to get a fresh perspective on your work every day, rather than getting tunnel-vision, which can happen from having your 'head down' all the time.
wubrr · 17 days ago
It sounds good and ostensibly makes sense, and many companies claim they do these things, just like many companies claim to have unlimited PTO.

How many actually sincerely follow through on these claims?

I've yet to encounter a single one.

wubrr commented on Ollama and gguf   github.com/ollama/ollama/... · Posted by u/indigodaddy
mangoman · 19 days ago
No, that's incorrect: llama.cpp supports providing a context-free grammar at sampling time, and it only samples tokens that conform to the grammar, rather than sampling tokens that would violate it.
wubrr · 19 days ago
Very interesting, thank you!
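The technique mangoman describes can be illustrated with a toy sketch (this is not llama.cpp's real API, and the grammar and vocabulary here are invented for illustration): before each sampling step, mask out every token that would take the output outside the grammar, so an invalid string can never be produced.

```python
# Toy illustration of grammar-constrained sampling (not llama.cpp's real API):
# at each step, only tokens that keep the output a prefix of some grammar-valid
# string are eligible, so the model physically cannot emit an invalid string.
import math
import random

VOCAB = ["yes", "no", "maybe", "{", "}"]
VALID_STRINGS = ["yes", "no"]  # the entire (hypothetical) grammar

def allowed_tokens(prefix: str) -> list[str]:
    """Tokens that keep `prefix` a prefix of some grammar-valid string."""
    return [t for t in VOCAB
            if any(s.startswith(prefix + t) for s in VALID_STRINGS)]

def toy_logits(prefix: str) -> dict[str, float]:
    # A real model would produce these scores; "maybe" scores highest here
    # precisely to show that masking overrides the model's preference.
    return {t: (3.0 if t == "maybe" else 1.0) for t in VOCAB}

def constrained_sample(prefix: str = "") -> str:
    logits = toy_logits(prefix)
    legal = allowed_tokens(prefix)
    weights = [math.exp(logits[t]) for t in legal]  # softmax over legal tokens only
    return random.choices(legal, weights=weights)[0]

token = constrained_sample("")
print(token)  # always "yes" or "no"; "maybe" can never be sampled
```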
wubrr commented on Ollama and gguf   github.com/ollama/ollama/... · Posted by u/indigodaddy
kristjansson · 19 days ago
and in fact leverages that control to constrain outputs to those matching user-specified BNFs

https://github.com/ggml-org/llama.cpp/tree/master/grammars

wubrr · 19 days ago
Very cool!
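For context, llama.cpp's grammar files use GBNF notation; a minimal sketch restricting output to a yes/no answer might look like the following (a made-up example for illustration; see the linked grammars directory for real ones):

```
# Force the model to answer with a single yes/no word
root ::= answer
answer ::= "yes" | "no"
```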
wubrr commented on Ollama and gguf   github.com/ollama/ollama/... · Posted by u/indigodaddy
tarruda · 19 days ago
The inference engine (llama.cpp) has full control over the possible tokens during inference. It can "force" the LLM to output only valid tokens so that it produces valid JSON.
wubrr · 19 days ago
Ahh, I stand corrected, very cool!
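Engines that do this typically expose it to clients as well. The sketch below builds a request body with a GBNF "grammar" field in the shape llama-server's /completion endpoint accepts, as I understand it; the field names are assumptions, so check the llama.cpp server docs. The HTTP call is commented out so the sketch runs without a server.

```python
# Sketch of a client request for grammar-constrained output from a locally
# running llama.cpp server. Field names ("prompt", "grammar", "n_predict")
# are assumptions based on my reading of llama-server's /completion endpoint.
import json

payload = {
    "prompt": "Answer yes or no: is the sky blue?",
    "grammar": 'root ::= "yes" | "no"',  # GBNF: only these two outputs possible
    "n_predict": 4,
}

body = json.dumps(payload)
print(body)

# With a server running locally one would POST it, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:8080/completion",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read())
```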
wubrr commented on Ollama and gguf   github.com/ollama/ollama/... · Posted by u/indigodaddy
tarruda · 19 days ago
I recently discovered that ollama no longer uses llama.cpp as a library; instead they link to the low-level library (ggml), which requires them to reinvent a lot of wheels for absolutely no benefit (if there's some benefit I'm missing, please let me know).

Even using llama.cpp as a library seems like overkill for most use cases. Ollama could make its life much easier by spawning llama-server as a subprocess listening on a unix socket, and forwarding requests to it.

One thing I'm curious about: Does ollama support strict structured output or strict tool calls adhering to a json schema? Because it would be insane to rely on a server for agentic use unless your server can guarantee the model will only produce valid json. AFAIK this feature is implemented by llama.cpp, which they no longer use.
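The subprocess idea above can be sketched generically. Since spawning a real llama-server requires the binary to be installed, this sketch substitutes a trivial stand-in child process that upper-cases whatever it receives; the proxy pattern (child listening on a Unix socket, parent forwarding requests) is the same. Unix sockets only, so this will not run on Windows.

```python
# Sketch of the pattern: run a "server" as a child process on a Unix socket
# and proxy requests to it. Swap the stand-in child for the real llama-server
# command (and its socket/port flags) in practice.
import os
import socket
import subprocess
import sys
import tempfile
import time

sock_path = os.path.join(tempfile.mkdtemp(), "llm.sock")

# Stand-in "llama-server": accepts one connection, echoes data upper-cased.
child_src = f"""
import socket
s = socket.socket(socket.AF_UNIX)
s.bind({sock_path!r})
s.listen(1)
conn, _ = s.accept()
conn.sendall(conn.recv(4096).upper())
conn.close()
"""
child = subprocess.Popen([sys.executable, "-c", child_src])

# Wait for the child to create the socket instead of sleeping blindly.
for _ in range(200):
    if os.path.exists(sock_path):
        break
    time.sleep(0.02)

def forward(request: bytes) -> bytes:
    """Proxy a single request to the child over the Unix socket."""
    with socket.socket(socket.AF_UNIX) as c:
        c.connect(sock_path)
        c.sendall(request)
        return c.recv(4096)

reply = forward(b"hello")
print(reply)
child.wait()
```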

wubrr · 19 days ago
> Does ollama support strict structured output or strict tool calls adhering to a json schema?

As far as I understand, this is generally not possible at the model level. The best you can do is wrap the call in a (non-LLM) JSON schema validator and emit an error JSON when the LLM output does not match the schema, which is what some APIs do for you, but it's not very complicated to do yourself.

Someone correct me if I'm wrong
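As other comments in the thread note, engines like llama.cpp can in fact constrain generation at the token level, but the wrap-and-validate fallback described above is still a common pattern. A minimal hand-rolled sketch (a real service would use a full JSON Schema validator such as the `jsonschema` package; the schema here is a toy mapping of required key to expected type):

```python
# Wrap-and-validate fallback: parse the model's raw output, check it against
# a toy schema, and return a structured error instead of propagating
# malformed JSON downstream.
import json

SCHEMA = {"name": str, "age": int}  # required key -> expected type

def validate_llm_output(raw: str) -> dict:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return {"error": "invalid_json", "detail": str(e)}
    for key, typ in SCHEMA.items():
        if key not in data or not isinstance(data[key], typ):
            return {"error": "schema_mismatch", "key": key}
    return {"ok": True, "data": data}

print(validate_llm_output('{"name": "Ada", "age": 36}'))  # ok
print(validate_llm_output('{"name": "Ada"}'))             # missing "age"
print(validate_llm_output('not json at all'))             # parse failure
```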

wubrr commented on uBlock Origin Lite now available for Safari   apps.apple.com/app/ublock... · Posted by u/Jiahang
concinds · 25 days ago
People should be way more upset at the fact that Safari adblocking today is still inferior to even MV3 Google Chrome. Apple's implementation of declarativeNetRequest was semi-broken until the very latest iOS 18.6.

Apple can do the bare minimum, years after everyone else, and barely get called out. The Reality Distortion Field is the enemy.

Also funny that other devs had the gall to make people pay (sometimes subscriptions!) for Safari adblockers inferior to the free adblockers on any other browser.

wubrr · 25 days ago
Apple's software is generally low quality, with more bugs and fewer features than equivalent Linux/OSS software. There is a long list of well-known, 5- to 10-year-old bugs that Apple simply ignores. They know their userbase is built off of marketing and 'design', not product quality.

> Also funny that other devs had the gall to make people pay (sometimes subscriptions!) for Safari adblockers inferior to the free adblockers on any other browser.

That's absolutely perfect, and fits into the typical Apple fangirl pattern that can be readily seen on Hacker News: pseudo-technical people promoting some closed, cute-looking macOS app that's just objectively worse than existing OSS alternatives.

I find it analogous to when financially successful people in their mid-life-crisis stage decide to buy a 'nice' car, while not having had any interest in cars previously. They invariably seem to end up with the most flashy/marketed car, even though that car is objectively worse than another car at half the price. They will extol the car's virtues in a way that sounds like they are literally reading off a marketing brochure, and actual car people just laugh at them.

wubrr commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
captainregex · 25 days ago
I’m still trying to understand what is the biggest group of people that uses local AI (or will)? Students who don’t want to pay but somehow have the hardware? Devs who are price conscious and want free agentic coding?

Local, in my experience, can't even pull data from an image without hallucinating (Qwen 2.5 VL in that example). Hopefully local/small models keep getting better and devices get better at running bigger ones

It feels like we do it because we can more than because it makes sense- which I am all for! I just wonder if i’m missing some kind of major use case all around me that justifies chaining together a bunch of mac studios or buying a really great graphics card. Tools like exo are cool and the idea of distributed compute is neat but what edge cases truly need it so badly that it’s worth all the effort?

wubrr · 25 days ago
If you're building any kind of product/service that uses AI/LLMs, the answer is the same as why any company would want to run any other kind of OSS infra/service instead of relying on some closed proprietary vendor API:

  - Costs.
  - Rate limits.
  - Privacy.
  - Security.
  - Vendor lock-in.
  - Stability/backwards-compatibility.
  - Control.
  - Etc.

u/wubrr

Karma: 533 · Cake day: September 8, 2023