Huh? The point I was trying to make is that the returns seem to equalize the further back we go.
> It's a solid alternative if you distrust tech which is today very heavy in the SP500.
The only reason BRK kept up with the S&P 500 over the last 10 years is its outsized investment in Apple around 2016, after the disastrous results of sitting out of tech in the 2000s and early 2010s and pursuing other investments such as Kraft Heinz or whatever.
As of September 2025, Apple is still 21% of BRK's publicly listed holdings.
What % of AAPL is a highly-leveraged bet on AI, in comparison to those listed above? If you could only own 1 of those over the past 10 years and the next 10, it'd be hard not to choose Apple, with maybe Google as a second pick (albeit with a sizable regulatory asterisk).
I know there's a tendency to reduce everything to numbers, but Berkshire is playing a qualitatively completely different risk management game from the rest of the companies in the top 10 in the S&P 500 right now.
Edit: selfishly, I think you have more to gain from understanding why BRK chose to invest in Apple than you do from aiming to "explain away" BRK as unremarkable.
If you're trying to choose where to lazily (i.e. with as little mental effort as possible) stash away your investments, that's a separate discussion. Buffett himself recommends the S&P 500. But BRK is playing a fundamentally different game from the S&P. An investment in VOO and an investment in BRK support very different theses.
My own personal financial history has been more damaged by actions taken than by forbearance and waiting, "time in the market beats timing the market" style. So I wait for the moment when I've got Something Better To Do with the money, and then I act. And try not to second-guess later.
One possible solution within my framework is (1) only buy things for which you have a coherent theory, and (2) make sure that theory holds water over the duration of your investing life.
It's possible that you had a coherent theory for purchasing TSLA at a certain price. If that price has out-run your theory, there is no contradiction in selling that TSLA and parking it in some place you have a coherent theory for, like VOO. This is maturity and humility, not hubris.
If I were in your position, I'd try to ask myself: does this decision to roll TSLA over into something I understand better (e.g. VOO) fall inside or outside the pattern of "my own personal financial history has been more damaged by actions taken"?
FWIW, "Time in beats timing" is a truism that applies to market-spanning indices, not a choice between individual securities. It would be a mistake to apply that logic to individual positions, unless you've given yourself the arbitrary restriction of only buying or selling TSLA and nothing else!
I've been watching my investment accounts, particularly the TSLA fraction, and see-sawing between "This has got to collapse soon, I should..." and "You cannot time the market, idiot".
I'm dissatisfied with the inaction, but I can't come up with a coherent theory about how I should act... Bleah.
If you don’t have a coherent theory for actively holding an asset, you probably shouldn’t be acting to hold it; you should be acting to sell it.
This is equally true for TSLA and NVDA as it is for VOO and BRK.
Why? Because when prices are too low you need to be able to hold (and ideally buy) with conviction and when prices are too high you need to be able to sell with conviction, although the latter is less important if you have steady income.
If you don’t have a theory of value, you’ll be able to do neither effectively, and risk loss of principal and opportunity in a downturn.
But even then, you are right that the moat of social cachet and implicit trust is still more valuable than the moat of technical implementation.
Systems that "work" tend to have some way of correcting for or mitigating the principal agent problem by aligning incentives.
I'd also point out that hardware is a much older discipline, in terms of how long it's been operating at scale. It's had more time to formalize and crystallize. Intel is 56 years old. Google is 27.
My only real current complaint is that the webhooks that are supposed to fire on repo activity have been a little flaky for us over the past 6-8 months. We have a pretty robust chatops system in play, so these things are highly noticeable to our team. Delivery is generally consistent, but we’ve had hooks fail to post to our systems on a few different occasions, which forced us to chase threads until we determined our operator ingestion service never even received the hooks.
That aside, we’re relatively happy customers.
They are pretty good, in my experience, at *eventually* delivering all updates. The outages take the form of a "pause" in delivery, every so often... maybe once every 5 weeks?
Usually the outages are pretty brief but sometimes it can be up to a few hours. Basically I'm unaware of any provider whose webhooks are as reliable as their primary API. If you're obsessive about maintaining SLAs around timely state, you can't really get around maintaining some sort of fall-back poll.
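FWIW, a rough sketch of the shape that fallback poll can take, assuming GitHub's repo events API and a hypothetical `handle_event` that is the same code path the webhook listener feeds into (names here are illustrative, not anyone's actual ingestion service):

```python
import requests

GITHUB_API = "https://api.github.com"

def poll_for_missed_events(owner, repo, token, seen_ids, handle_event):
    """Reconcile against the repo events API; anything we haven't already
    ingested via webhook gets replayed through the same handler.
    (Sketch only: a real version would paginate and persist seen_ids.)"""
    resp = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/events",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    for event in resp.json():
        if event["id"] not in seen_ids:
            handle_event(event)      # same code path the webhook uses
            seen_ids.add(event["id"])

# Run this on a timer (cron, a worker queue, etc.) as the safety net
# behind the webhook listener, not as a replacement for it.
```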
Strongly recommend giving that a try yourself. And trying to build an algorithm around it!
Here’s an example: Tolstoy really admired Turgenev, who was friends with Theodor Storm and Gustave Flaubert, and greatly admired Gogol.
If you like Anna Karenina you’ll probably find something of value in Torrents of Spring, Immensee, Madame Bovary or Dead Souls.
It spiders out pretty quickly!
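If you actually wanted to turn that into an algorithm, the toy version is just a breadth-first walk over an "admired / was friends with" graph. The graph below contains only the handful of relationships mentioned above:

```python
from collections import deque

# Toy influence graph built only from the relationships mentioned above;
# a real one would come from a much larger dataset.
INFLUENCE = {
    "Tolstoy": ["Turgenev"],
    "Turgenev": ["Theodor Storm", "Gustave Flaubert", "Gogol"],
}

def recommendations(start, max_hops=2):
    """Walk outward a few hops from an author you already like."""
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        author, hops = queue.popleft()
        if hops == max_hops:
            continue
        for neighbour in INFLUENCE.get(author, []):
            if neighbour not in seen:
                seen.add(neighbour)
                found.append(neighbour)
                queue.append((neighbour, hops + 1))
    return found

print(recommendations("Tolstoy"))
# ['Turgenev', 'Theodor Storm', 'Gustave Flaubert', 'Gogol']
```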
As a result I spent hours wandering around through the site finding music I liked. I don't have the time to do that anymore, so what made the site wonderful back then (being forced to dig deep) just wouldn't work for me now. :(
I like the idea of "people with your listening/purchasing habits also purchase this". Or "people in your geo purchased this", or "here's the music of people performing in your area this weekend".
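The simplest version of that is plain co-occurrence counting over purchase histories; a rough sketch with made-up data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: user -> set of albums they bought.
purchases = {
    "u1": {"Album A", "Album B", "Album C"},
    "u2": {"Album A", "Album C"},
    "u3": {"Album B", "Album D"},
}

# Count how often each pair of albums is bought by the same person.
co_bought = Counter()
for albums in purchases.values():
    for a, b in combinations(sorted(albums), 2):
        co_bought[(a, b)] += 1

def also_bought(album, top_n=3):
    """'People who bought X also bought...' ranked by co-purchase count."""
    scores = Counter()
    for (a, b), n in co_bought.items():
        if album == a:
            scores[b] += n
        elif album == b:
            scores[a] += n
    return scores.most_common(top_n)

print(also_bought("Album A"))  # [('Album C', 2), ('Album B', 1)]
```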
Spotify/Apple Music/etc. (the "streamers") have a very different incentive model from the Bandcamps of the world, because their income stream is super concentrated on the major labels and heavily tied to plays of that music in particular. So they're biased in favor of that "kind" of music in discovery.
They actually are averse to showing people hyper-niche music, which I think is why discovery is such a tricky problem for them to "solve": their salary depends on them not fully exploring the solution space.
I think moving out of the universe of royalty-based revenue is a huge step in the right direction for somebody trying to solve that problem at scale, even if it's a smaller market.
So the question becomes: is $0.002/minute a good price for this? I have never run GitHub Actions, so I am going to assume that experience on other, similar systems applies.
So if your job takes an hour to build and run through all tests (a bit on the long side, but I have some tests that run for days), then you are going to pay GitHub $0.12 for that run. You are probably going to pay significantly more for the compute for running that (especially if you are running on multiple testers simultaneously). So this does not seem too bad.
This is probably going to push a lot of people to invest more in parallelizing their workloads, and/or putting them on faster machines in order to reduce the number of minutes they are billed for.
I should note that if you are doing something similar in AWS using SSM (Systems Manager), I found that if you are running small jobs on lots of systems, the AWS charges can add up very quickly. I had to abandon a monitoring system idea I had for our fleet (~800 systems) because the per-hit cost of just a monitoring ping was $1.84 (I needed a small amount of data from an on-worker process). Running that every 10 minutes was going to be more than $250/day. Writing/running my own monitoring system was much cheaper.
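Back-of-the-envelope on both of those numbers (the rates are the ones quoted above; the 500-runs-a-month figure is just an assumption for illustration):

```python
# GitHub Actions: $0.002/min for a 60-minute build-and-test job.
per_minute = 0.002
job_minutes = 60
print(f"per run: ${per_minute * job_minutes:.2f}")                      # $0.12
print(f"per month (500 runs): ${per_minute * job_minutes * 500:.2f}")   # $60.00

# AWS fleet monitoring: $1.84 per fleet-wide ping, fired every 10 minutes.
per_ping = 1.84
pings_per_day = 24 * 60 // 10   # 144 pings/day
print(f"per day: ${per_ping * pings_per_day:.2f}")                      # $264.96
```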
It makes sense to do usage-based pricing with a generously-sized free tier, which seems to be what they're doing? Offering the entire service for free at any scale would imply that you're "paying" for/subsidizing this orchestration elsewhere in your transactions with GitHub. This is more-transparent pricing.
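i.e. pricing of roughly this shape (the free-tier size below is a placeholder, not GitHub's actual number; the per-minute rate is the one quoted upthread):

```python
def monthly_bill(minutes_used, free_minutes=2000, rate_per_minute=0.002):
    """Usage-based pricing with a free tier: you only pay for the overage."""
    billable = max(0, minutes_used - free_minutes)
    return billable * rate_per_minute

print(monthly_bill(1500))   # 0.0  -- inside the free tier
print(monthly_bill(10000))  # 16.0 -- (10000 - 2000) * $0.002
```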
Although, this puts downward pressure on orgs' willingness to pay such a large price for GH enterprise licenses, as this service was hitherto "implicitly" baked into that fee. I don't think the license fees are going to go down any time soon, though :P