You do need to start with 20% equity though, which at current prices means buyers either already own assets (market risk) or have bonkers amounts of cash sitting around (inflation risk). I understand the FOMO some people must have.
I wonder if ultimately there is a cascade happening, where increased valuations lead previous owners either to take out new mortgages, or to sell their old house and pay more for a new one, creating a cycle of ever-increasing prices. I've never read anything about whether such an effect actually exists, though.
I mean, basically, where is there yield? Crypto and NFTs (yuck to the latter), real estate, and equities. Even just running a correlation analysis on these asset classes over the last 20 years suggested that crypto was deeply underpriced (despite only a 12/13-year track record), while real estate is tremendously overvalued. Equities were the only thing to react in a reasonable way to both (1) the initial realization that covid was serious and (2) the Fed stepping in with two novel revolvers, one for SMBs and one for corporate credit. Treasury even threw a bit into the bowl and performed stimulus. The last 1.5 years likely minted more "wealth" than the previous 100 years combined (hand-waving a bit): a framework was laid to make much more money than you put in, provided you were looking at what was to come based on what was said.
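For what it's worth, the correlation check I mean is nothing fancy; here's a back-of-the-envelope sketch in plain Python. The return series are synthetic placeholders (a real analysis would pull ~20 years of actual price history per asset class), and the relationships between them are made up purely for illustration:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic daily returns -- placeholders, not real market data.
random.seed(0)
equities = [random.gauss(0.0003, 0.01) for _ in range(252)]
crypto = [r * 3 + random.gauss(0, 0.02) for r in equities]       # assumed correlated, more volatile
real_estate = [random.gauss(0.0002, 0.003) for _ in range(252)]  # assumed mostly independent

print(pearson(equities, crypto))       # strongly positive on this toy data
print(pearson(equities, real_estate))  # near zero on this toy data
```

On real price series you'd compute log returns first and be careful about overlapping date ranges, but the core of "running correlations" is just this.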
Care to elaborate?
IMO, there should be some kind of archive that conserves and publishes them after some time has passed, so that they can be ported to new hardware, kept accessible, and somehow documented for future historians.
https://www.sciencedirect.com/science/article/pii/S096098222...
I have many ideas and questions regarding your paper:
- How do you adjust weights between different spikes?
- Do you use or implement a kind of wavelet for wave propagation, for example for spike interference?
- What neuromorphic hardware can I buy to run your code/ the SNN?
=)
Current neuromorphic hardware is not easily accessible, but you can simulate spiking neural networks. Check out, e.g., Brian 2 (https://brian2.readthedocs.io/en/stable/) or Nengo (nengo.ai).
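If you just want to poke at spiking dynamics before installing a framework, a leaky integrate-and-fire neuron is a few lines of plain Python. This is a toy sketch, not the Brian 2 or Nengo API, and all parameters are made up:

```python
# Toy leaky integrate-and-fire (LIF) neuron with Euler integration.
# All parameters are illustrative, not tied to any real hardware or model.
def simulate_lif(i_input=1.5, v_rest=0.0, v_thresh=1.0, tau=10.0,
                 dt=0.1, steps=1000):
    v = v_rest
    spikes = []                      # time steps at which the neuron fires
    for t in range(steps):
        # Membrane dynamics: dv/dt = (-(v - v_rest) + i_input) / tau
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:            # threshold crossed: record spike, reset
            spikes.append(t)
            v = v_rest
    return spikes

print(len(simulate_lif()), "spikes")
```

With a constant input above threshold it fires periodically; below threshold (e.g. `i_input=0.5` here) it never spikes. Brian 2 lets you write the same differential equation as a string and handles the integration and spike propagation for you.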
Empirically, you could have bought a share of the SPX at any point in time and sold it at a profit later. The real problem is what happened in between, and whether you were able to hold on.
> If you want to make progress in any area, you need to be willing to give up your best ideas from time to time. [...] Medawar notes that he twice spent two whole years trying to corroborate groundless hypotheses.
Unfortunately, being willing to scrap your idea is only one part of the equation. Securing funding after 2 "failed" post-docs is an entirely different matter.
Last weekend I was exploring the current possibilities of automated Ghidra analysis with Codex. My first attempt derailed quickly, but after I gave it the pyghidra documentation, it reliably wrote Python scripts that altered data types etc. exactly how I wanted, though based on fixed rules.
My next goal is to incorporate LLM decisions into the process, e.g. letting the LLM come up with a guess at a meaningful function name to make the decompilation easier to read, stuff like that. I made a skill for this functionality and let Codex plough through in agentic mode. I stopped it after a while because I wasn't sure what it was doing, and I haven't had time to work on it since. I would also need to run some sanity checks on the functions it has already renamed.
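For those sanity checks, my first idea would be to validate the LLM-proposed names offline before ever applying them in Ghidra. The helper and the rejection rules below are hypothetical, just my guess at a reasonable filter:

```python
import keyword
import re

# Hypothetical filter for LLM-proposed function names: must be a valid
# identifier, not a Python reserved word, not a lazy placeholder the
# model tends to emit, and not a collision with an existing symbol.
PLACEHOLDERS = {"function", "func", "unknown", "todo", "unnamed"}

def is_sane_name(name, taken):
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        return False
    if keyword.iskeyword(name) or name.lower() in PLACEHOLDERS:
        return False
    if name in taken:                # avoid clobbering existing symbols
        return False
    return True

taken = {"main", "parse_header"}
for proposed in ["decode_packet", "main", "123bad", "todo"]:
    print(proposed, is_sane_name(proposed, taken))
```

A second pass could diff the LLM's name against the decompiled body (does a name like `decode_packet` at least mention strings or callees that plausibly relate to packets), but that's harder to automate.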
I'd be curious what workflows others have already devised. Is MCP the way to go?
Is there a place where people discuss these things?