I wonder how much more confident and less anxious we can be, psychologically, when we're doing something for others rather than for ourselves.
Which one you pick will largely come down to personal morals or ethics.
The addictive properties of the drug, and the profits involved, seem to make the responses to milder legal incentives inelastic. The addict does not give a shit that he may go to jail. The high-level supplier does not care that he might be risking his life when he is making a gazillion dollars and plans to go out shooting anyway. The low-level suppliers fall under the same problem as the 'addict' bucket, because you can recruit as many as you need from that pool, no matter the consequences.
That seems to be the real challenge with AI for this use case. It has no real critical thinking skills, so it's not really competent to choose reliable sources. So instead we're lowering the bar to just asking that the sources actually exist. I really hate that. We shouldn't be lowering intellectual standards to meet AI where it's at. These intellectual standards are important and hard-won, and we need to be demanding that AI be the one to rise to meet them.
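To make the complaint concrete: the "lowered bar" is roughly the check below, a minimal sketch using only the Python standard library (the URLs are invented). It verifies only that a cited source responds at all, which says nothing about whether it's reliable.

    # Existence check only -- the "lowered bar". Says nothing about reliability.
    import urllib.request
    import urllib.error

    def source_exists(url, timeout=5.0):
        """Return True if the URL responds at all."""
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    for url in ["https://example.com/paper", "https://example.com/no-such-page"]:
        print(url, "->", "exists" if source_exists(url) else "missing")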
Except that those free credits will go away, and by then you won't want to do all the work of moving it over, when it would've been easier back when you just had that first monolith server up.
I think free credits and hyped-up technology are to blame. Basically, a gamed onboarding process that gets people to over-engineer and spend more.
This is a great way to kill a policy.
It would technically be most fair if every parent were given the same amount of money per child, period. Then they could do what they needed or wanted with it.
But doing so would not only increase the costs dramatically (by a multiple), it would also give money to many parents who didn't need it for child care.
That's great in a hypothetical world where budgets are infinite, but in the real world they're not. The more broadly you spread the money, the less benefit each person receives. If you extended an equal benefit to parents who were already okay with keeping their children home, it's likely that the real outcome would be reduced benefits for everyone going to daycare. Now you're giving checks to parents who were already doing okay at home, but you've also diminished the childcare benefit for those who needed it, which was the goal in the first place.
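As a back-of-the-envelope illustration (all numbers invented), spreading a fixed budget over every household with children instead of just the ones paying for daycare divides the benefit accordingly:

    # Invented numbers: a fixed budget diluted by broader eligibility.
    budget = 1_000_000_000        # total program budget, dollars
    daycare_households = 500_000  # households actually paying for daycare
    all_households = 2_000_000    # every household with children

    print(f"targeted:  ${budget / daycare_households:,.0f} per household")  # $2,000
    print(f"universal: ${budget / all_households:,.0f} per household")      # $500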
They do pay for it and it is expensive, but apparently it produced a large reduction in child poverty, so that's a win.
From my understanding, it also reduced women's participation in the workforce and reduced investment in childcare infrastructure, since more mothers were then caring for children at home.
So this is possible, it just depends on what you want to incentivize.
I'm tempted to use /s, but then again...
For example, one path may be: AI, robotics, and space travel all move forward in leaps and bounds.
Then there could be tons of work in creating material things by people who didn't have the skills before, and physical goods get a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't faced before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want, since the AI and robots handle all the day-to-day.
What does it mean to say that we humans act with intent? It means that we have some expectation or prediction about how our actions will affect what comes next, and we choose our actions based on how much we like that effect. The ability to predict is fundamental to our ability to act intentionally.
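Written as code, that definition is just a predict-then-choose loop. This is a minimal sketch (the thermostat example and all values are invented); the predictor and preference function are stand-ins for whatever does the real modeling:

    # Intent as predict-then-choose: pick the action whose predicted
    # effect we like the most.
    def act_with_intent(actions, predict, preference):
        return max(actions, key=lambda a: preference(predict(a)))

    # Toy usage: a thermostat "intending" to reach 21 C.
    current = 18.0
    actions = [-1.0, 0.0, +1.0]            # candidate heating adjustments
    predict = lambda a: current + a        # expected next temperature
    preference = lambda t: -abs(t - 21.0)  # closer to 21 C is better
    print(act_with_intent(actions, predict, preference))  # -> 1.0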
So in my mind: even if you grant all the AI-naysayer's complaints about how LLMs aren't "actually" thinking, you can still believe that they will end up being a component in a system which actually "does" think.
Especially when modeling acting with intent. The ability to measure against past results and come up with new, innovative approaches seems like it may emerge from a system that models first and then uses LLM output: something built on a foundation of tools, rather than an LLM driving tools over MCP. Perhaps LLMs would generate the response that humans like to read, without being the part that comes up with the answer.
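A minimal sketch of that shape, with everything invented for illustration: a plain search computes the answer first, and the "LLM" (stubbed out here) is only asked to phrase it for humans afterwards.

    # "Model first": a deterministic component does the thinking...
    def plan_route(start, goal, graph):
        """Breadth-first search; this is where the answer comes from."""
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # ...and the LLM stand-in handles presentation only, not decisions.
    def render_for_humans(path):
        if path is None:
            return "No route found."
        return "Go " + ", then ".join(path) + "."

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(render_for_humans(plan_route("A", "D", graph)))  # Go A, then B, then D.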
Either way, yes, it's possible for a thinking system to use LLMs (and humans potentially piece together sentences in a similar way), but it's also possible LLMs will be cast aside and a new approach will be used to create an AGI.
So for me: even if you are an AI-yeasayer, you can still believe that they won't be a component in an AGI.