Readit News logoReadit News
LudwigNagasena commented on Uncle Sam shouldn't own Intel stock   wsj.com/opinion/uncle-sam... · Posted by u/aspenmayer
godelski · a day ago
Governments owning stocks feels like double dipping with a touch of socialism.

I get why the idea seems sane, you want to get returns on investments, right? But with governments the vehicle for returns is taxes and investments are often called "grants" (but also loans, credits, etc). The tax system means you can make investments and always get some return. FFS you could invest in a company going nowhere and you still recoup through taxes. The only way you lose in this system is by tanking the entire economy. Idk why this is so hard to understand. Why people think things like research grants is akin to tossing money into a pit and burning it.

I know the current party is anti tax but aren't they also anti nationalization (Words, not actions)? Changing (or adding) your returns vehicle to be through stock only creates nationalization. It increases returns but also puts an additional thumb on the scale. It's very short sighted.

Who knew socialism would come to America wrapped in an A̶m̶e̶r̶i̶c̶a̶n̶ capitalism flag?

LudwigNagasena · 14 hours ago
Governments make regulations that command how companies operate and take taxes from companies. Governments already own significant stocks of companies in all but name.

If you make targeted grants, rebates, subsidies, etc, it even seems more fair to make taxing also targeted.

And is giving Intel subsidies and making Intel pay for it more socialist than giving Intel subsidies and making everyone pay for it?

LudwigNagasena commented on MCP tools with dependent types   vlaaad.github.io/mcp-tool... · Posted by u/vlaaad
LudwigNagasena · 8 days ago
> there is no way to tell the AI agent “for this argument, look up a JSON schema using this other tool”

There is a description field, it seems sufficient for most cases. You can also dynamically change your tools using `listChanged` capability.

LudwigNagasena commented on Is a corporation a slave? Many philosophers think so   theconversation.com/is-a-... · Posted by u/hhs
xg15 · 9 days ago
Seemed more like hypercapitalist rhetoric to me. After all, if corporations are literal persons, then you could argue for the worst corpocracy as a "human rights" obligation, without even having to talk about wealth or economics as the usual justification.
LudwigNagasena · 9 days ago
If the author argued that it would be hypercapitalist rhetoric, but the author argued for worker cooperatives.
LudwigNagasena commented on Pfeilstorch   en.wikipedia.org/wiki/Pfe... · Posted by u/gyomu
xdennis · 9 days ago
The draft/promaja. In Eastern Europe people genuinely think that if you leave two windows open you'll get various diseases like cold/flu/headache/ear pain/etc.

I've tried to understand this belief. So if you stand outside and it's windy, that's perfectly fine. But if you're inside, and you open two windows, that's deadly, even if there's no draft to be felt. I think some people think it's even more deadly if you can't feel it.

https://www.reddit.com/r/skeptic/comments/1csstle/draft_myth...

LudwigNagasena · 9 days ago
Being cold weakens your immune system. Draft air increases heat loss. There is nothing complex to understand. Outside you would wear a scarf or other appropriate clothing to not feel cold.
LudwigNagasena commented on Is a corporation a slave? Many philosophers think so   theconversation.com/is-a-... · Posted by u/hhs
LudwigNagasena · 9 days ago
That looks like a weird exercise in anti-capitalist rhetoric. More of a performance rather than serious inquiry.

Deleted Comment

LudwigNagasena commented on Is chain-of-thought AI reasoning a mirage?   seangoedecke.com/real-rea... · Posted by u/ingve
pornel · 11 days ago
You're looking at this from the perspective of what would make sense for the model to produce. Unfortunately, what really dictates the design of the models is what we can train the models with (efficiently, at scale). The output is then roughly just the reverse of the training. We don't even want AI to be an "autocomplete", but we've got tons of text, and a relatively efficient method of training on all prefixes of a sentence at the same time.

There have been experiments with preserving embedding vectors of the tokens exactly without loss caused by round-tripping through text, but the results were "meh", presumably because it wasn't the input format the model was trained on.

It's conceivable that models trained on some vector "neuralese" that is completely separate from text would work better, but it's a catch 22 for training: the internal representations don't exist in a useful sense until the model is trained, so we don't have anything to feed into the models to make them use them. The internal representations also don't stay stable when the model is trained further.

LudwigNagasena · 10 days ago
It’s indeed a very tricky problem with no clear solution yet. But if someone finds a way to bootstrap it, it may be a new qualitative jump that may reverse the current trend of innovating ways to cut inference costs rather than improve models.
LudwigNagasena commented on Is chain-of-thought AI reasoning a mirage?   seangoedecke.com/real-rea... · Posted by u/ingve
limaoscarjuliet · 11 days ago
> In fact, I think in near future it will be the norm for MLLMs to "think" and "reason" without outputting a single "word".

It will be outputting something, as this is the only way it can get more compute - output a token, then all context + the next token is fed through the LLM again. It might not be presented to the user, but that's a different story.

LudwigNagasena · 10 days ago
That’s the only effective way to get more compute in current production LLMs, but the field is evolving.
LudwigNagasena commented on Is chain-of-thought AI reasoning a mirage?   seangoedecke.com/real-rea... · Posted by u/ingve
potsandpans · 11 days ago
> It is not a "philosophical" (by which the author probably meant "practically inconsequential") question.

I didn't take it that way. I suppose it depends on whether or not you believe philosophy is legitimate

LudwigNagasena · 10 days ago
The author called it “the least interesting question possible” and contraposed it with such questions as “how accurately it reflects the actual process going on.” I don’t see any other way to take it.
LudwigNagasena commented on Is chain-of-thought AI reasoning a mirage?   seangoedecke.com/real-rea... · Posted by u/ingve
AbrahamParangi · 11 days ago
You're not allowed to say that it's not reasoning without distinguishing what is reasoning. Absent a strict definition that the models fail and that some other reasoner passes, it is entirely philosophical.
LudwigNagasena · 10 days ago
I think it’s perfectly fine to discuss whether it’s reasoning without fully committing to any foundational theory of reasoning. There are practical things we expect from reasoning that we can operationalise.

If it’s truly reasoning, then it wouldn’t be able to deceive or to rationalize a leaked answer in a backwards fashion. Asking and answering those questions can help us understand how the research agendas for improving reasoning and improving alignment should be modified.

u/LudwigNagasena

KarmaCake day4726November 12, 2018View Original