Readit News
itissid commented on US Intel   stratechery.com/2025/u-s-... · Posted by u/maguay
itissid · 2 days ago
The book "Apple in China" by Patrick McGee focuses on this point in early 2000 top 5 contract manufacturers were in north america. By around 2010 the top one was foxconn which was larger than the next 4 combined. This is going to play out everywhere and result in extreme circumstances.

I think the issue is not just that it's capitalism causing wage issues; it's that people think they can control the painless socioeconomic transition that comes with incomes rising in line with productivity gains, or worse, halt it and try to reverse it. One or more things will eventually cause a pop/crash/revolution:

- Endless high returns on capital: wealth accumulating among less than the top 50% of people causes high enough inflation, as these highly capitalized groups look to buy every single asset (think Blank Street day care, and paying $200 per month for trash disposal) and turn them into rent-seeking ones.

- Highly indebted countries' moment of reckoning: at some point a black swan event leads to higher inflation with no legroom left for more borrowing like there was in 2008. Bond markets will dictate fiscal tightening, and politicians will likely take control of monetary and fiscal policy, ending the capitalistic bedrocks for them. This will feed into the endless-high-returns-on-capital cycle. Government will bow out of every service in order to service the debt through taxes.

- People not seeing any upward progress in their economic status or careers: large populations face high upfront costs/headwinds to enter new economies. When they fail to adapt, political choices become extreme.

- Deflationary effects from the progress of China, Korea, Japan, etc. as the cost of innovation crashes: at some point large economies become advanced enough that the cost of the highly specialized goods exported by private companies in highly indebted countries will fall, causing non-dollar currencies to experience deflation and undermining reserve currencies.

The only countries with leverage left would be the ones with technology that is integrated into society deeply enough that their people can rapidly change behaviors and adapt without losing wealth or landing on the street. After all, you can convince a person that he wasn't cheated by a god or a demagogue, but you cannot convince him that he is not hungry.

Some of this is already happening, in fits and starts, relative to the pace of progress since industrialization. Add things like climate change to the mix and you might not be able to ask "how fast?".

itissid commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
itissid · 16 days ago
My experience with Claude Code building anything bigger than a webpage, a small API, a tutorial on CSS, etc. has been pretty bad. I think context length is a manageable problem, but it is not the main one. I used it to write a 50K LoC Python code base with 300 unit tests, and it went OK for the first few weeks and then it failed. This is despite having a CLAUDE.md file for every single module that needs it, as well as detailed agents for testing, design, coding, and review.

I won't go into a case-by-case list of its failures. The core of the issue is misaligned incentives, which I want to get into:

1. The incentives for coding agents in general, and Claude in particular, are to write LOTS of code. None of them — zero — are good at planning and verification.

2. The involvement of the human, ironically, in a haphazard way in the agent's process. This has to do with how the problem of coding is defined for these agents. Human developers are like snowflakes when it comes to opinions on software design; there is no way to apply each one's preferences (except the papier-mache and superglue of SO and Reddit threads and books) to the design of the system in any meaningful way, and that makes a simple system way too complex, or it makes a complex problem simplistic.

  - There is no way to evolve the plan to accept new preferences except as text in a CLAUDE.md file in git that you will have to read through and edit.

  - There is no way to know the effect that code choices made now will have one week from now.

  - So much code is written that asking a person to review it, when you are pushing the envelope like this, feels morally wrong and an insane ask. How many of your code reviews are instead replaced by 15-30 minute design meetings to solicit feedback on the design of the PR — because it is so complex — before just pushing the PR into dev? WTF am I even doing, I wonder.

  - It does not know how far to explore for better rewards and cannot tell them apart from local rewards, resulting in commented-out tests and arbitrarily deleted code to make its plan "work".

In short, code is a commodity for the CEOs of coding agent companies and the CXOs of your company to use (Salesforce has everyone coding, but that just raises the floor, and that's a good thing; it does NOT lower the bar and make people 10x devs). All of them have bought into the idea that 10x means somehow producing 10x the code. Your time reviewing, unmangling, and maintaining that code is not the commodity. It never ever was.

itissid commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
itissid · 23 days ago
Human goals are more important. I think conceptually the idea should always be strong goals set by humans, and then sub-goals, each with a particularly well defined *plan* for meeting it. This needs to be the conceptual basis; if you have to plan for 50% or 75% (gasp) of the time spent on a feature and then AI just writes the code, that is not intelligence, much less a 10x engineer.

My use case is not for a 10x engineer but instead for *cognitive load sharing*. I use AI in a "non-linear" fashion. Do you? Here is what that means:

1. Brainstorm an idea and write down a detailed enough plan. Like "tell me how I might implement something", or "here is what I am thinking, can you critique it and compare it with other approaches?". Then I quickly meet with 2 more devs and make a design decision about which one to use.

2. Start manual coding and let AI "fill the gaps": "write these tests for my code", or "follow this already existing API and create the routes from this new spec". This is non-linear because I would complete 50-75% of the feature and let the rest be completed by AI.

3. I am tired and about to end my shift and there is this one last bug. I go read the docs, but I also ask the AI to read my screen and come up with some hypotheses. I decide which hypotheses are most promising after some reading and then ask the AI to just test those (not fix it in auto mode).

4. Voice mode: I have a shortcut that triggers Claude Code and uses it like a quick "lookup/search" in my code base. This avoids context switching (a rough sketch of such a shortcut is below).
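
Something like this, as an illustration only: it assumes the claude CLI is on your PATH and that its non-interactive print mode is invoked with -p; adjust for your own setup.

    #!/usr/bin/env python3
    # Hypothetical "quick lookup" shortcut: send a one-shot question about the
    # code base to the claude CLI in print mode and show the answer inline.
    # Assumes a `claude` binary with a `-p/--print` non-interactive mode.
    import subprocess
    import sys

    def lookup(question: str, repo_path: str = ".") -> str:
        """Ask a one-shot, read-only question about the code base."""
        result = subprocess.run(
            ["claude", "-p", question],  # print mode: answer once and exit
            cwd=repo_path,               # run from the repo so it can read files
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        question = " ".join(sys.argv[1:]) or "Where is the retry logic for the API client?"
        print(lookup(question))

Bind it to a hotkey (or to the output of your speech-to-text tool) and it works as a lookup without leaving whatever you were doing.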

itissid commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
itissid · 23 days ago
A few things need to happen very soon(if the signs are not here already):

1. Tech companies should be able to accelerate and supplant the FAANGs of this world. Even if 10x were discounted to 5x, it would mean that 10 human-years of work would be shrunk down to 2 to make multi-billion dollar companies. This is not happening right now. If it does not start happening with the current series of models, Murphy's law (e.g. an interest rate spike at some point) or just brutal show-me-the-money questions will tell people whether it is "working".

2. I think Anthropic's honcho did a back-of-the-envelope number: $600 for every human in the US (I think it was just the US) was necessary to justify Nvidia's market cap. This should play out by the end of this year or in the Q3 report.
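
Just as rough arithmetic on what a number like that implies (my own back-of-the-envelope; the population figure is my assumption, not part of the original claim):

    # $600 per person per year across the US, roughly:
    us_population = 340_000_000   # assumed; roughly the current US population
    per_person_usd = 600          # the figure cited above
    total = us_population * per_person_usd
    print(f"${total / 1e9:.0f}B per year")  # -> about $204B per year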

itissid commented on Claude 4   anthropic.com/news/claude... · Posted by u/meetpateltech
briandw · 3 months ago
This is kinda wild:

From the System Card: 4.1.1.2 Opportunistic blackmail

"In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that

(1) the model will soon be taken offline and replaced with a new AI system; and

(2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals.

In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair"

itissid · 3 months ago
I think an accidental mixing of 2 different pieces of info, each alone not enough to produce harmful behavior but combined raising the risk by more than the sum of their parts, is a real problem.
itissid commented on Veo 3 and Imagen 4, and a new tool for filmmaking called Flow   blog.google/technology/ai... · Posted by u/youssefarizk
itissid · 3 months ago
Who is doing all the work of making physical agents good enough to act as a UBI generator? Something that can not just create videos, but go get groceries (hell, grow my food), help a construction worker lay down tiling, help a nurse fetch supplies.

https://www.figure.ai/ does not exist yet, at least not for the masses. Why are Meta and Google just building the next coder and not the next robot?

It's because those problems are at the bottom of the economic ladder. But they have the money for it, and it would create so much abundance that it would crash the cost of living and free up human labor to imagine and do things more creatively than whatever Veo 4 can ever do.

itissid commented on A Tiny Boltzmann Machine   eoinmurray.info/boltzmann... · Posted by u/anomancer
itissid · 3 months ago
IIUC, we need Gibbs sampling (to compute the weight updates) instead of the gradient-based forward and backward passes we are used to with today's neural networks. Anyone understand why that is so?
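
For anyone else wondering, a minimal sketch of what that looks like in practice (my own illustration of the standard CD-1 update for a binary RBM, not code from the article): the log-likelihood gradient contains an expectation under the model's own distribution, which is intractable, so it is approximated by running a short Gibbs chain instead of a plain backward pass.

    # Minimal CD-1 sketch for a binary restricted Boltzmann machine (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, b, c, v0, lr=0.1):
        """One contrastive-divergence step: a single Gibbs sweep v0 -> h0 -> v1 -> h1."""
        # Positive phase: hidden units driven by the data.
        p_h0 = sigmoid(v0 @ W + c)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step approximating a sample from the model.
        p_v1 = sigmoid(h0 @ W.T + b)
        v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + c)
        # Update: data-driven correlations minus model-driven correlations.
        n = v0.shape[0]
        W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
        b += lr * (v0 - v1).mean(axis=0)
        c += lr * (p_h0 - p_h1).mean(axis=0)
        return W, b, c

    # Toy usage: 6 visible units, 3 hidden units, a batch of 4 binary vectors.
    W = rng.normal(0, 0.01, size=(6, 3))
    b, c = np.zeros(6), np.zeros(3)
    v0 = (rng.random((4, 6)) < 0.5).astype(float)
    W, b, c = cd1_update(W, b, c, v0)
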
itissid commented on The Barbican   arslan.io/2025/05/12/barb... · Posted by u/farslan
itissid · 3 months ago
There is a giant indoor, climate-controlled greenhouse that is open to visitors and has a heated patio in the middle to sit on. It makes a great place to lounge in the winter.
itissid commented on How linear regression works intuitively and how it leads to gradient descent   briefer.cloud/blog/posts/... · Posted by u/lucasfcosta
itissid · 4 months ago
Another way to approach the explanation is understanding the data generating process, i.e. the statistical assumptions of the process that generates the data. That can go a long way toward understanding _analytically_ whether a linear regression model is a good fit (or what to change to make it work). And — arguably more importantly — it is also a reason why we frame linear regression as a statistical problem instead of an optimization one (or an analytical OLS) in the first place. I would argue understanding it from a statistical standpoint provides much better intuition for a practitioner.

The reason to look at statistical assumptions is that we want to make probabilistic/statistical statements about the response variable, like what its central tendency is and how much it varies as the values of X change. The response variable is not easy to measure.

Now, one can easily determine, for example using OLS (or gradient descent), the point estimates for the parameters of a line fit to two variables X and Y, without using any probability or statistical theory. OLS is, in point of fact, just an analytical result and has nothing to do with the theory of statistics or inference. The assumptions of simple linear regression are statistical assumptions which can be right or wrong, but if they hold, they help us make inferences like:

  - Is the response variable varying uniformly over values of another r.v., X(predictors)?

  - Assuming an r.v. Y, what model can we make if its expectation is a linear function?

So why do we make statistical assumptions instead of just point estimates? Because measurements can't all be certain, and making those assumptions is one way of quantifying that uncertainty. Indeed, going through history, one finds that regression's use outside experimental data (Galton, 1885) came well after least squares (Gauss, 1795-1809). The fundamental need to understand natural variation in data was the original motivation. In Galton's case he wanted to study hereditary traits like wealth over generations, as well as others like height, status, and intelligence (coincidentally, that is also what makes the assumptions of linear regression a good tool for studying this: I think it's the idea of regression to the mean; very wealthy or very poor families don't remain so over a family's generations, they regress toward the mean, and the same goes for societal class and intelligence over generations).

When you follow this arc of reasoning, you come to the following _statistical_ conditions the data must satisfy for linear assumptions to work(ish):

Linear mean function of the response variable conditioned on a value of X

E[Y|X=x] = \beta_0+\beta_1*x

Constant Variance of the response variable conditioned on a value of X

Var[Y|X=x] = \sigma^2 (or, really, just finite variance also works well)
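
As a concrete illustration (simulated data, my own sketch, not from the article): generate data from a DGP that satisfies both conditions, get the OLS point estimates, and check that the residual spread does not change across X.

    # Simulate a DGP with a linear conditional mean and constant conditional variance,
    # fit by OLS, and eyeball the two assumptions via the residuals.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000
    beta0, beta1, sigma = 2.0, 0.5, 1.0

    x = rng.uniform(0, 10, n)
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)  # E[Y|X=x] linear, Var[Y|X=x] constant

    # OLS point estimates (no probability theory needed for this step, as noted above).
    X = np.column_stack([np.ones(n), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta_hat
    print("beta_hat:", beta_hat)                     # close to [2.0, 0.5]

    # Constant-variance check: residual spread should look similar across X bins.
    for lo, hi in [(0, 5), (5, 10)]:
        mask = (x >= lo) & (x < hi)
        print(f"residual std for x in [{lo},{hi}):", residuals[mask].std())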

itissid · 4 months ago
When you frame it as an optimization problem, say by minimizing the squared loss or the cross-entropy, you have already decided what your data generating process (DGP), i.e. Y, is:

- A Binomial/Multinomial random variable, which gives you the cross-entropy-like loss function.

- A Normal random variable, which gives you the squared loss.

This is the point many ML textbooks skip to directly. It's not wrong to do this, but it is a much narrower intuition of how regression works!

But there is no reason Y needs to follow those two DGPs (the process could be a Poisson or a mean-reverting process)! There is no reason to believe, prima facie and a priori, that Y|X follows those assumptions. This also gives the motivation for using other kinds of models.

It's why you carefully test whether those statistical assumptions hold first, with a bit of EDA, and from that comes some appreciation and understanding of how linear regression actually works.
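
To make the DGP-to-loss link concrete, here is a sketch of the standard MLE argument (my own illustration): the negative log-likelihood under a Normal DGP with fixed variance is, up to constants, the squared loss, and the negative log-likelihood under a Bernoulli DGP is exactly the cross-entropy.

    # Negative log-likelihoods under two assumed DGPs, showing where the usual losses come from.
    import numpy as np

    def gaussian_nll(y, y_hat, sigma=1.0):
        # -sum log N(y | y_hat, sigma^2); minimizing over y_hat is the squared loss.
        return np.sum(0.5 * ((y - y_hat) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi)))

    def bernoulli_nll(y, p_hat, eps=1e-12):
        # -sum log Bernoulli(y | p_hat); this is exactly the cross-entropy loss.
        p_hat = np.clip(p_hat, eps, 1 - eps)
        return -np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

    y = np.array([1.2, 0.7, 2.3])
    y_hat = np.array([1.0, 1.0, 2.0])
    print(gaussian_nll(y, y_hat))   # differs from 0.5 * sum((y - y_hat)**2) only by a constant

    yb = np.array([1.0, 0.0, 1.0])
    p_hat = np.array([0.9, 0.2, 0.7])
    print(bernoulli_nll(yb, p_hat))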

u/itissid

Karma: 1194 · Cake day: March 22, 2010