A very bad model that lacks accuracy and precision, yes. Maybe if you're a PhD quant at Citadel you can carve out a very small statistical edge when gambling on an economic system. There's no analytic solution to complex economic systems in practice. It's just noise and various ways of validating the efficient market hypothesis.
Also, because of heteroskedasticity and volatility clustering, using time-based bars (e.g. change over a fixed interval of time) is not ideal in modeling. Sampling with entropy bars like volume imbalance bars, instead of time bars, gives you superior statistical properties, since information arrives in the market at irregular times. Sampling by time is never the best way to simulate/gamble on a market. Information is the causal variable, not time. Some periods of time carry very little information relative to others. In modeling, you want to sample information independently of time.
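A minimal sketch of the idea, using plain volume bars (the simpler cousin of volume imbalance bars): close a bar whenever cumulative traded volume crosses a fixed threshold, so bars arrive faster when trading activity (i.e. information) arrives faster. The tick format and threshold here are hypothetical.

```python
# Sketch: volume bars instead of time bars.
# A bar closes once cumulative traded volume crosses a fixed threshold,
# so busy (information-rich) periods produce more bars than quiet ones.
# (Volume *imbalance* bars would additionally track signed buy/sell flow.)

def volume_bars(ticks, threshold):
    """ticks: iterable of (price, volume) pairs.
    Yields (open, high, low, close, volume) per bar."""
    o = h = l = c = None
    vol = 0.0
    for price, v in ticks:
        if o is None:               # start a new bar
            o = h = l = price
        h, l, c = max(h, price), min(l, price), price
        vol += v
        if vol >= threshold:        # bar complete: emit and reset
            yield (o, h, l, c, vol)
            o = None
            vol = 0.0

# Hypothetical tick stream: the first bar closes on the third tick
# because 30 + 50 + 40 = 120 >= 100.
ticks = [(100, 30), (101, 50), (99, 40), (102, 60), (100, 70)]
bars = list(volume_bars(ticks, 100))
# bars == [(100, 101, 99, 99, 120), (102, 102, 100, 100, 130)]
```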
The thread was about economists, not quants.
> There's no analytic solution to complex economic systems in practice.
yes
That said, one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes.
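Compound interest is exactly such a closed-form model: time enters as an explicit parameter. A minimal sketch (function name and defaults are my own):

```python
import math

def compound(principal, rate, t, n=None):
    """Closed-form compounding: time t (in years) is an explicit input.
    n = compounding periods per year; continuous compounding when n is None."""
    if n is None:
        return principal * math.exp(rate * t)          # A = P * e^(r*t)
    return principal * (1 + rate / n) ** (n * t)       # A = P * (1 + r/n)^(n*t)

compound(100.0, 0.05, 10)         # continuous: ~164.87
compound(100.0, 0.05, 10, n=12)   # monthly:    ~164.70
```

There is no such closed form for a whole economy, which is why simulation (in discrete or continuous time) is the fallback.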
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time: some by cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
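A toy sketch of the ABM idea (purely illustrative, not any specific published model): households spend a fraction of their cash at randomly chosen firms each step, and firms pay revenue back out as wages.

```python
import random

# Toy agent-based model: households and firms exchanging cash.
# Each step, every household spends a fraction of its cash at a
# random firm; firms then pay all revenue out as equal wages.
random.seed(0)

def step(households, firms, spend_rate=0.5):
    for i, cash in enumerate(households):
        spend = cash * spend_rate
        households[i] -= spend
        firms[random.randrange(len(firms))] += spend
    wage = sum(firms) / len(households)   # revenue paid out as wages
    for j in range(len(firms)):
        firms[j] = 0.0
    return [cash + wage for cash in households]

households = [100.0] * 10
firms = [0.0] * 3
for _ in range(5):
    households = step(households, firms)

# money is conserved across steps
assert abs(sum(households) + sum(firms) - 1000.0) < 1e-9
```

Real large-scale ABMs are vastly richer (credit, prices, labor markets), but the mechanics are the same: simulate agents step by step rather than solve anything in closed form.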
The recommended approach has the advantage of separating information specific to Claude Code, but I think that in the long run Anthropic will have to adopt the AGENTS.md format.
Also, when using separate files, memories will be written to CLAUDE.md, and periodic triaging will be required: deciding what to leave there and what to move to AGENTS.md.
For floating point there is the interesting property that 0 is signed, due to its sign-magnitude representation. Mathematically 0 is not signed, but in floating point's sign-magnitude representation, "+0" is equivalent to lim x->0+ x and "-0" is equivalent to lim x->0- x.
This is the only situation where a floating point division by "zero" makes mathematical sense, where a finite number divided by a signed zero will return a signed +/-Inf, and a 0/0 will return a NaN.
Why should 0/0 return a NaN instead of Inf? Because the limit depends on the numerator: lim x->0 4x/x = 4, NOT Inf, so no single value is correct.
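A small stdlib-Python demonstration of the signed-zero behavior (note Python raises on float division by zero, so the IEEE x/±0 results themselves aren't directly reachable; the `sign_bit` helper is my own):

```python
import math
import struct

# +0.0 and -0.0 compare equal, but carry different sign bits in the
# IEEE 754 bit pattern (sign-magnitude representation).
def sign_bit(x):
    return struct.unpack('<Q', struct.pack('<d', x))[0] >> 63

assert 0.0 == -0.0
assert sign_bit(0.0) == 0
assert sign_bit(-0.0) == 1

# -0.0 behaves like the limit from below: a negative finite number
# divided by +Inf underflows to -0.0, and copysign sees the sign.
neg_zero = -1.0 / math.inf
assert math.copysign(1.0, neg_zero) == -1.0

# NaN (the 0/0 result under IEEE semantics) compares unequal to itself.
assert math.nan != math.nan
```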
I think the most pragmatic solution is to have 2 tiers:
1. use existing standards (i.e. IEEE 754 for FP, de-facto standards for integers, like two's complement, Big-Endian, etc.)
2. a fast, native format per compute device, using distinct sub-types so they cannot be mixed in the same expression
(it is in principle possible to construct such a stack, potentially with more context, with a Result type, but I don't know of any way to do so that doesn't sacrifice a lot of performance because you're doing all the book-keeping even on caught errors where you don't use that information)
If you only need it for debugging, then maybe better instrumentation and observability is the answer.
Best practice is to set a long expiration date, such as 1-2 years; regulations on this vary by state. After that, unused credits can be recognized as breakage revenue.
If a company treats credits as money, it will have to comply with numerous financial regulations. For example, if a company compensates for SLA breaches with cash rather than credits, this could be considered insurance.