Readit News
justinlivi commented on Claude Code daily benchmarks for degradation tracking   marginlab.ai/trackers/cla... · Posted by u/qwesr123
conception · a month ago
I assume that after any compacting of the context window, the session is more or less useless. I've never had consistent results after compacting.
justinlivi · a month ago
Compacting equals death of the session in my process. I do everything I can to avoid hitting it. If I accidentally fly too close to the sun and compact, I tend to revert and start fresh. As soon as it compacts, the session is basically useless.
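The workflow described above boils down to a threshold check: restart the session before auto-compaction can trigger. A minimal sketch, assuming a hypothetical token budget and safety margin (these are illustrative values, not actual Claude Code settings):

```python
# Assumed values for illustration only; real context limits vary by model.
CONTEXT_LIMIT = 200_000  # hypothetical token budget for the session
SAFETY_MARGIN = 0.8      # restart well before the compaction threshold

def should_restart(tokens_used: int) -> bool:
    """Return True when it's time to abandon the session and start fresh,
    rather than letting compaction degrade it."""
    return tokens_used >= CONTEXT_LIMIT * SAFETY_MARGIN
```

The point of the margin is to leave room to wrap up the current task before the limit is hit, since compaction past that point is treated as unrecoverable.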
justinlivi commented on The Timmy Trap   jenson.org/timmy/... · Posted by u/metadat
stefanv · 7 months ago
What if the problem is not that we overestimate LLMs, but that we overestimate intelligence? Or to express the same idea for a more philosophically inclined audience, what if the real mistake isn’t in overestimating LLMs, but in overestimating intelligence itself by imagining it as something more than a web of patterns learned from past experiences and echoed back into the world?
justinlivi · 7 months ago
I think AI skeptics have a strong bias toward assuming that human intelligence fundamentally functions differently from LLMs. They may be correct, but we don't have a strong enough understanding of human cognition to make the claim in terms as certain as the skeptical argument usually is.

The training methods of human learning and machine learning are obviously vastly different, as are the infrastructure-level mechanics. These elements are likely never going to align, though with time the machine infrastructure may come to increasingly resemble human bio-hardware. I bring this up because these known vast differences may account for a significant portion of the differences in expected output between human and machine processing.

We don't understand the fundamental conceptual "black box" portions of either form of processing well enough to state definitively what is similar or dissimilar about those hazy areas. Somewhere within that not-well-understood area is what we have collectively and vaguely defined as "intelligence." But also within that area are all the other capabilities that both humans and now machines are quite good at: prediction, fluency, translation.

The challenge of lexicon and definition is potentially as difficult a task as sharpening our understanding of the hazy black-box portion of both machine and human processing. Until both are better defined, I don't think we have a good measure for answering the question of machine intelligence either way.
justinlivi commented on Show HN: LogoFox – fast logo maker   logofox.co/form/name?utm_... · Posted by u/alantrum
alantrum · 8 years ago
Hi, Alan from LogoFox here. I understand your concern regarding the IP terms.

I will make it straight and simple.

1) We use third-party icons from The Noun Project. We use their Pro API, which gives us the right to use and sell the icons as part of the logos. Those icons come from thousands of designers around the world. Normally, when a designer uploads an icon to The Noun Project, they grant their IP rights. But how can we be sure the uploaded icon is really their own creation? We can't.

2) We use hundreds of fonts. We check the license for all of them. But even with that, there is still a small risk of license infringement.

3) Thousands of logos are created every hour on LogoFox. Some will probably look similar to existing logos out there. For obvious reasons, we can't personally take on liability for the logos generated on the site. You have to do your own due diligence.

Those 3 reasons mainly explain our current terms. They allow us to protect ourselves from any liability problems that may occur. Those liabilities also exist with a logo designed by a logo designer. The difference is, we don't deal with one logo a week but with thousands. So we adapted our terms accordingly. I hope you understand.

TL;DR: whether your logo comes from a logo maker or a designer, you have to do your own due diligence.

justinlivi · 8 years ago
The problem with this model is that if you aren't able to do due diligence yourself due to technical restrictions, how are your end users supposed to overcome the even greater technical restrictions on due diligence that using your service imposes?
justinlivi commented on Moral Machine   moralmachine.mit.edu/... · Posted by u/kevlar1818
function_seven · 9 years ago
Fun test to take, but seriously hope they're not drawing any conclusions from the mix of people I "preferred" to save or kill. I didn't consider the age, criminality, or gender of any of the pedestrians or occupants I killed or saved. I just erred toward non-intervention, unless the intervention choice saved bystanders at the expense of occupants. When the potential casualties were animals, they all died.
justinlivi · 9 years ago
I took the same sort of dispassionate approach, valuing the lives of the passengers above all else and staying the course otherwise. I was disappointed to discover that the parsing of the results had no room for such a methodology. Based on my entirely algorithmic approach, it was determined that I favored youth and fitness.
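That dispassionate rule can be sketched as a trivial decision function. A minimal illustration only; the function name and the outcome encoding are hypothetical, not part of the Moral Machine test:

```python
def choose_action(stay_outcome: dict, swerve_outcome: dict) -> str:
    """The rule described above: intervene only when swerving saves
    passengers; otherwise default to non-intervention (stay the course).

    Each outcome is a hypothetical dict of casualty counts,
    e.g. {"passengers": 0, "pedestrians": 2}.
    """
    if stay_outcome["passengers"] > swerve_outcome["passengers"]:
        return "swerve"  # only intervene to protect the occupants
    return "stay"        # otherwise, never intervene
```

Note that the rule never consults age, criminality, or gender, which is exactly why a results parser that only measures those attributes misreads it.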
justinlivi commented on Unraveling Möbius strips of edge-case data   oreilly.com/ideas/unravel... · Posted by u/wallflower
justinlivi · 10 years ago
I'm curious to see how HIPAA regulations interact with this type of research. I would imagine it would be seriously limiting (though my actual knowledge of the laws is severely limited, so maybe not?)
justinlivi commented on Introduction to Metaprogramming in Nim   hookrace.net/blog/introdu... · Posted by u/vbit
justinlivi · 10 years ago
As someone who's never coded in either Go or Nim, this seems to me to be the exact opposite of Go's ideology. The metaprogramming is very cool, but I imagine sharing a code base that uses it heavily is a nightmare.

u/justinlivi

Karma: 12 · Cake day: July 19, 2015