What's the actual state of these "ML compilers" currently, and what is the near term promise?
Bodybuilders do this thing called "bulking and cutting." The best way to add muscle fast is to overeat. Work out lots. Sleep lots. Eat lots.
You get fat, but you also get muscular because food is never a limiting factor.
Then, they lose the extra fat with a crash diet.
Google, FB and such are such money machines that they never have to cut. They can just bulk. The others... they want some of that rapid growth potential too, but can't afford to add fat forever.
Corporate bulking and cutting.
If it's words from Ilya, Sam, the board... the words are all about alignment, benefiting humanity and such.
Meanwhile, all parties involved are super serious tycoons who are super serious about riding the AI wave, establishing moats, monopolies and the next AdWords, azure, etc.
These are such extreme opposite vocabularies that it's just hard to bridge. It's two conversations happening under totally different assumptions and everyone assumes at least someone is being totally disingenuous.
Meanwhile, "AI alignment" is such a charismatic topic. Meanwhile, the less futuristic but more applicable, "alignment questions" are about the alignment of msft, openai, other investors and consortium members.
If Ilya, Sam or any of them are actually worried about SI alignment... they should at least give credence to the idea that we're all worried about their human alignment.
One of those things that is widely true, but rarely admitted to. It seems to be very much a maturity thing. The older, larger and more governed an org is, the more likely such a pattern is.
Habits become precedents. Precedents become rules. A pattern emerges where everyone operates within rules. Staying within the ruleset represents known safety. Even if something is dubious.. as long as it's within the ruleset, you are safe.
A shaky principle tentatively applied once.. doesn't have that kind of safety. That means it's less likely to be stretched and made absurd.
There is a logic to trimming unused budgets. Not perfect, but it wouldn't surprise me if it worked well enough, often enough. If a department keeps going over budget, well.. they need more budget.. or maybe less work. Where is that budget going to come from? Departments without enough budget.
It's hard to get more legible than last year's expenditure as the starting point for next year's budget. Nothing very notable about birthing this "principle."
I'm sure it makes sense, often. At least in the sense that it's the easiest, good enough method.
If there's a new management, using old methods is helpful. They don't know enough.. and this just gets the job done. If budgets become contentious, sticking to "principles" helps smooth things.
That is the point though.. whether it's a big-hype management method like agile, or some unofficial budgetary principle that happened to work before, these are principles. We like to be principled, especially when we don't really know what to do.
There's a literary trope of a bone-casting seer. It takes a wise person to cast bones. It's an art and science. Sure, you have to know what all the bones mean. But, you also have to figure out whether it's a good idea to fight this particular battle, build a town in that particular place... And also to understand the role bone casting plays in this particular case.
Bones must be cast, because we like external validation. It helps to bring everyone together, and calms underconfident, overwhelmed, or underunited leadership.
Knowing when to cast them, why, all the different implications.. how to define the question, how to approach the answer.. those are jobs for the seer, not the bones themselves.
Things are better when soldiers watch the bones, and chieftains watch the seer. If and when that flips, the paradigm is not at its best.
https://www.theverge.com/2021/8/4/22609150/sony-playstation-...
Consoles are often sold at a loss initially but quickly become profitable as hardware costs fall.
The razor-and-blades model isn't as ubiquitous as you might believe.
Sure, in the platform-game paradigm, different products and revenue sources cross-subsidize one another at different stages of their life cycles. This is just how firms work, and is often mostly a matter of accounting.
It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.
Investors and executives.. everyone in 2023 is hyper focused on "Thiel Monopoly."
Platform, moat, aggregation theory, network effects, first mover advantages.. all those ways of thinking about it.
There's no point in being Bing to Google's AdWords... So the big question is the pathway to being the AdWords. "Winning." That's the paradigm. This is where the big returns will be.
However.. we should always remember that the future is harder to see than hindsight suggests. Post-fact analysis can often make things seem a lot simpler and more inevitable than they ever were.
It's not clear what a winner even is here. What are the bottlenecks to be controlled? What are the business models, the revenue sources? What represents the "LLM Google," the America Online, the Yahoo, or the 90s dumb pipe?
FWIW I think all the big techs have powerful plays available.. including keeping powder dry.
No doubt that proximity to OpenAI, control, influence, access to IP.. all strategic assets. That's why they're all invested and involved in the consortium.
That said, assets are not strategies. It's hard to have strategies when strategic goals are unclear.
You can nominate a strategic goal from here, try to stay upstream, make exploratory investments and bets... There is no rush for the prize, unless the prize is known.
Obviously, I'm assuming the prize is not AGI and a solution to everything... That kind of abstraction is useful, but I do not think it's operative.
It's not currently a race to see whose R&D lab turns on the first superintelligent consciousness.
Assuming I'm correct on that, we really have no idea which applications LLM capabilities companies are actually competing for.
Also there were claims starting in the 50s where planarians were taught a maze or to respond to certain stimuli and then fed to other planarians and that the trait would sometimes transfer as well. I don't know if any of those studies were reproduced, thus the claims might be dubious, but it would be interesting if they were true.
[0] https://www.smithsonianmag.com/science-nature/these-decapita...
So the alternative to great man theory, in this case, is terrible man theory... I'm not following.
If focusing on control over openai, is great man theory... What's the contrary notion?
This is the literal answer to "why." Also bans on various casino hacks.