I have a couple HEPA filters in my house that hopefully keep particulate exposure down. Does this mean that I have to run them longer? That I need more of them continuously running to keep exposure to VOCs low?
According to Polymarket, there's a >50% chance that happens: https://polymarket.com/event/will-the-supreme-court-rule-in-...
* This person is wearing a suit, I'm going to charge double
* This is a regular who buys the same thing every week; I can charge 30% more without breaking his routine
* This one is buying the ingredients for a recipe she's making tonight; I can charge double on one product because she won't want to go to another grocer just for one missing item.
Or, in economic terms, it is doing price discrimination to turn the consumer surplus into profit for itself. I think it's obvious why consumers wouldn't like that. Although they can also do "this one is a cheapskate with lots of free time, I have to offer a 20% discount to keep him coming back."
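The surplus-capture mechanism above can be made concrete with a toy sketch (all numbers are made up for illustration): under a single uniform price, customers who value the good more than the price keep the difference as consumer surplus; with per-customer pricing, the seller charges each buyer their exact willingness to pay and pockets that surplus.

```python
# Hypothetical willingness-to-pay per customer, and the seller's unit cost.
willingness_to_pay = [10, 8, 6, 4]
cost = 3

# Uniform pricing: the seller picks the single price that maximizes profit.
best_price = max(
    willingness_to_pay,
    key=lambda p: (p - cost) * sum(w >= p for w in willingness_to_pay),
)
buyers = [w for w in willingness_to_pay if w >= best_price]
uniform_profit = (best_price - cost) * len(buyers)
uniform_surplus = sum(w - best_price for w in buyers)  # value buyers keep

# Perfect price discrimination: charge each customer exactly their valuation.
# Every customer buys, but nobody keeps any surplus.
discrim_profit = sum(w - cost for w in willingness_to_pay)

print(f"uniform: profit={uniform_profit}, consumer surplus={uniform_surplus}")
print(f"discriminating: profit={discrim_profit}, consumer surplus=0")
```

With these numbers, the uniform price ends up at 8 (profit 10, surplus 2 left with the buyers), while the discriminating seller earns 16 and leaves the buyers nothing, which is exactly the transfer the comment describes.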
When all you have is a hammer... It makes a lot of sense that a transformation layer that makes the tokens more semantically relevant would help optimize the entire network after it and increase the effective size of your context window. And one of the main immediate obstacles stopping those models from being intelligent is context window size.
On the other hand, the current models already cost something on the order of the median country's GDP to train, and they are nowhere close to that in value. The saying that "if brute force didn't solve your problem, you didn't apply enough force" is meant to be heard as a joke.
https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)
Models are expensive, but they're not that expensive.
I am not sure whether a good tool exists for threaded discussions with multiple focus areas - something like git, but for conversations?
> Existing large language models (LLMs) rely on shallow safety alignment to reject malicious inputs
which lets attackers defeat alignment by first providing an input where the tokens the LLM would flag as harmful are swapped for semantically opposite ones, and then providing the actual desired input, which seems to bypass the RLHF.
What I don't understand is why _input_ is so important for RLHF - wouldn't the actual output be what you want to train against to prevent undesirable behavior?
No ads in TV programming. No product placement in movies. No billboards. No subway or bus station advertising posters. No paid recommending of specific products. No promotional material for products - nothing with fictional elements. No web ads. No sponsored links. No social media ads. No paid reviews.
(you could still do some of those covertly, with "under the table" money, but then if you're caught you get fined or go to jail)
No tracking of consumer preferences of any kind, not even if you have an online store. Just a database of past purchases on your own store - and using them for profiling via ML should be illegal too.
If people want to find out about a product, they can see it on your company's website (seeking it directly), or get a leaflet from you. In either case no dramatized / fictional / aspirational images or video should be shown.
>And in all cases, you are self-imposing a restriction that will give other nations an economic advantage and jeopardizing long-term sovereignty.
You're removing cancer.
My argument is this: even if the system itself becomes more complex, it might be worth it to make it better partitioned for human reasoning. I tend to quickly get overwhelmed, and my memory is getting worse by the minute. It's a blessing for me to have smaller services that I can reason about, predict consequences from, and deeply understand. I can ignore everything else. When I have to deal with the infrastructure, I can focus on that alone. We also have better and more declarative tools for handling infrastructure compared to code. It's a blessing when 18 services don't share the same database, and it's a blessing when 17 services aren't colocated in the same repository, carrying dependencies that most people don't even identify as dependencies. Think law of leaky abstractions.