Readit News
tdullien commented on Exit Tax: Leave Germany before your business gets big   eidel.io/exit-tax-leave-g... · Posted by u/olieidel
derriz · 21 days ago
It’s not as crazy as it initially seems.

It’s because of a fundamental difference between how capital gains tax and income tax are collected.

Capital gains are deferred - so as years pass you’re working up a tax liability but most countries recognize that forcing collection every year is not practical given the often illiquid nature of capital gains and the difficulty around valuation.

I’m from a country which has no exit tax on capital gains, and notoriously a certain wealthy telecoms magnate - having been resident all his life - moved to Portugal just before realizing billions in capital gains. Thus, despite earning multiple billions through business activities in his native country, he effectively paid zero tax.

I myself have benefited from this lack of capital gain exit tax as I moved to a country with very low capital gains tax. So despite the fact that my modest equity portfolio earned most of its growth while I was living in Ireland, when I sell, the Irish government will get nothing.

The problem, it seems to me, is the method of valuation for the deemed disposal and/or the fact that it can cause a “liquidity squeeze” for the taxpayer.

I don’t see a simple solution - other than maybe getting rid of capital gains taxes completely and collecting more consumption taxes instead, but I’m sure this would just open up a range of other tax-avoidance loopholes.

tdullien · 21 days ago
The obvious solution is for the state to accept illiquid securities as payment for tax.
tdullien commented on Exit Tax: Leave Germany before your business gets big   eidel.io/exit-tax-leave-g... · Posted by u/olieidel
alephnerd · 21 days ago
Something I've noticed with German business law is that it is very much structured in such a way that if you aren't an incumbent player, you are essentially incentivized to be absorbed by them.

In the US we do have issues with businesses, but it's not like the Bosch, Thyssen, or Tschira families are any less unethical.

The level of hierarchy I've noticed in German firms and founders is insane to say the least. I'd love to do some quantitative research into this, but I haven't been in academia or policy for years now.

tdullien · 21 days ago
German here. I fully agree that German companies tend to be crazy hierarchical.
tdullien commented on Leonardo Chiariglione – Co-founder of MPEG   leonardo.chiariglione.org... · Posted by u/eggspurt
karel-3d · 22 days ago
I... don't understand how AI is related to video codecs. Maybe because I don't understand either video codecs or AI on a deeper level.
tdullien · 22 days ago
Every predictor is a compressor, every compressor is a predictor.

If you're interested in this, it's a good idea to read about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and go from there.

In general, lossless compression works by predicting the next symbol (letter/token/frame) and then encoding the difference from the prediction succinctly in the data stream. The better you predict, the less you need to encode, and the better you compress.

The flip side of this is that all fields of compression have a lot to gain from progress in AI.
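The predict-then-encode idea above can be sketched in a few lines. This is a toy illustration (not from the comment itself): the predictor simply guesses that each byte equals the previous one, and only the residual (the prediction error) is stored. A good predictor turns the stream into mostly zeros, which a generic entropy coder - here `zlib` stands in for one - then squeezes much harder than the raw data.

```python
import zlib

def residuals(data: bytes) -> bytes:
    # Predict each byte as equal to the previous byte and store only
    # the difference (mod 256). The better the predictor fits the data,
    # the more zeros the residual stream contains.
    prev = 0
    out = bytearray()
    for b in data:
        out.append((b - prev) % 256)
        prev = b
    return bytes(out)

# A slowly varying signal: the "previous byte" predictor fits it well.
signal = bytes((i // 8) % 256 for i in range(4096))

raw = len(zlib.compress(signal))
pred = len(zlib.compress(residuals(signal)))
print(raw, pred)
```

On this signal the residual stream compresses to noticeably fewer bytes than the raw stream, even though both contain exactly the same information - the gain comes entirely from the quality of the prediction, which is the sense in which a better predictor is a better compressor.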

tdullien commented on EPA says it will eliminate its scientific research arm   nytimes.com/2025/07/18/cl... · Posted by u/anigbrowl
consumer451 · a month ago
One of the most onerous regulatory regimes in the USA comes from the FAA.

When people question these regulations, and the cost of certifying aircraft and aircraft parts, someone always rightly responds "these regulations are written in blood."

The same can easily be said about environmental regulations, except in their case, the pool of blood is orders of magnitude deeper.

Do people really think that President Richard Nixon created the EPA to stick it to big business?

tdullien · a month ago
Thank you for pointing out that it was Nixon that created the EPA.
tdullien commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
Timwi · 2 months ago
If you read the literature on AI safety carefully (which uses the word “goal”), you'll find they're not talking about LLMs either.
tdullien · 2 months ago
I think the Anthropic "omg blackmail" article clearly talks about both LLMs and their "goals".
tdullien commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
BoorishBears · 2 months ago
You wrote this article and you're not familiar with hidden states?
tdullien · 2 months ago
I am not aware that an LLM contains any.
tdullien commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
lukeschlather · 2 months ago
Yes, strictly speaking, the model itself is stateless, but there are 600B parameters of state machine for frontier models that define which token to pick next. And that state machine is both incomprehensibly large and also of a similar magnitude in size to a human brain. (Probably, I'll grant it's possible it's smaller, but it's still quite large.)

I think my issue with the "don't anthropomorphize" is that it's unclear to me that the main difference between a human and an LLM isn't simply the inability for the LLM to rewrite its own model weights on the fly. (And I say "simply" but there's obviously nothing simple about it, and it might be possible already with current hardware, we just don't know how to do it.)

Even if we decide it is clearly different, this is still an incredibly large and dynamic system. "Stateless" or not, there's an incredible amount of state that is not comprehensible to me.

tdullien · 2 months ago
Fair, there is a lot that is incomprehensible to all of us. I wouldn't call it "state" as it's fixed, but that is a rather subtle point.

That said, would you anthropomorphize a meteorological simulation just because it contains lots and lots of constants that you don't understand well?

I'm pretty sure that recurrent dynamical systems pretty quickly become universal computers, but we are treating those that generate human language differently from others, and I don't quite see the difference.

tdullien commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
elliotto · 2 months ago
To claim that LLMs do not experience consciousness requires a model of how consciousness works. The author has not presented a model, and instead relied on emotive language leaning on the absurdity of the claim. I would say that any model one presents of consciousness often comes off as just as absurd as the claim that LLMs experience it. It's a great exercise to sit down and write out your own perspective on how consciousness works, to feel out where the holes are.

The author also claims that a function (R^n)^c -> (R^n)^c is dramatically different to the human experience of consciousness. Yet the author's text I am reading, and any information they can communicate to me, exists entirely in (R^n)^c.

tdullien · 2 months ago
Author here. What's the difference, in your perception, between an LLM and a large-scale meteorological simulation, if there is any?

If you're willing to ascribe the possibility of consciousness to any complex-enough computation of a recurrence equation (and hence to something like ... "earth"), I'm willing to agree that under that definition LLMs might be conscious. :)

tdullien commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
Timwi · 2 months ago
The author seems to want to label any discourse as “anthropomorphizing”. The word “goal” stood out to me: the author wants us to assume that we're anthropomorphizing as soon as we even so much as use the word “goal”. A simple breadth-first search that evaluates all chess boards and legal moves, but stops when it finds a checkmate for white and outputs the full decision tree, has a “goal”. There is no anthropomorphizing here, it's just using the word “goal” as a technical term. A hypothetical AGI with a goal like paperclip maximization is just a logical extension of the breadth-first search algorithm. Imagining such an AGI and describing it as having a goal isn't anthropomorphizing.
tdullien · 2 months ago
Author here. I am entirely ok with using "goal" in the context of an RL algorithm. If you read my article carefully, you'll find that I object to the use of "goal" in the context of LLMs.

u/tdullien

Karma: 932 · Cake day: July 11, 2013