frostix commented on Ancient Stars Made Extraordinarily Heavy Elements (2023)   news.ncsu.edu/2023/12/anc... · Posted by u/samch
pjungwir · 2 years ago
Something I've been curious about lately is: why do we find elements on Earth clumped together, for example veins of iron or gold? I can understand coal or oil since that comes from something organic that put it there. But what about elemental substances? When I throw stuff into my blender or spice mixer it gets pretty homogeneous. Surely an exploding star ought to mix things up better than that. So is there something that brings the iron & gold back together again? I don't know if this is a question for an astrophysicist, chemist, or geologist, but I suppose HN has all three. :-)
frostix · 2 years ago
Most likely due to some process resulting in separation by density. You’d probably also need a geologist who understands the Earth’s previous states to weigh in. I’d hazard a guess that most of it occurred over time during which much of the Earth was in a hot, fluid state and things clumped together. The structures you see today might be due to the surrounding materials, which also tended to clump, and to how things cooled over time (slowly, quickly, etc.). There’s also the fact that the Earth still has active convection going on: heating things up, spinning them around, letting gravity pull things down, rising up, cooling, plus more complex motions like plate tectonic movement, fracturing, etc. I suspect it would be pretty difficult to say exactly why any specific mineral deposit follows the structure we find it in, given how complex the process is, but someone may know.
frostix commented on SEC has not approved Bitcoin ETFs [fixed]   twitter.com/SECGov/status... · Posted by u/pawelduda
Eji1700 · 2 years ago
We really really really need some legislation about governmental agencies using privately owned companies to announce things.

There was already some for radio/early news, but the landscape has changed so much, and it bothers me a ton that these platforms are being used.

frostix · 2 years ago
I think we just need media literacy at least for now while the noise is still manageable. It’s perfectly fine to rely on private news to spread the information IMO, the issue is that people should independently verify said information.

After reading said announcement on Twitter, the first thing I’d do (if I cared about it) would be to head on over to sec.gov or use a search engine to find the official SEC site, then navigate from there to find the official announcement. Any reputable news source should include a link to the official announcement to save you this verification step.

At some point there may be so much targeted disinformation/misinformation out there that we need legislation to help protect against it but I don’t think we’re there yet.

frostix commented on Bad scientific code beats code following "best practices" (2014)   yosefk.com/blog/why-bad-s... · Posted by u/luu
jakobnissen · 2 years ago
I'm a scientist programmer working in a field composed of biologists and computer scientists, and what I've experienced is almost exactly the opposite of the author.

I've found the problems that biologists cause are mostly:

* Not understanding dependencies, public/private, SCM or versioning, making their own code uninstallable after a few months

* Writing completely unreadable code, even to themselves, making it impossible to maintain. This means they always restart from zero, and projects grow into folders of a hundred individual scripts with no order, depending on files that no longer exist

* Foregoing any kind of testing or quality control, making real and nasty bugs rampant.

IMO the main issue with the software people in our field (of which I am one, even though I'm formally trained in biology) is that they are less interested in biology than in programming, so they are bad at choosing which scientific problems to solve. They are also less productive when coding than the scientists because they care too much about the quality of their work and not enough about getting shit done.

frostix · 2 years ago
>They are also less productive when coding than the scientists because they care too much about the quality of their work and not enough about getting shit done.

Ultimately I’d say the core issue here is that research is complex and those environments are often resource-strapped relative to other environments. As such, this idea of “getting shit done” takes priority over everything. To some degree it’s not that much different from startup business environments that favor shipping features over writing maintainable and well (or even partially) documented code.

The difference in research that many fail to grasp is that the code is often as ephemeral as the specific exploratory research path it’s tied to. Sometimes software in research is more general purpose, but more often it’s tightly coupled to a new idea deep-seated in some theory. Just as exploration paths into the unknown are rapidly explored and often discarded, much of the work around them is as well, including the software.

When you combine that understanding with an already resource-strapped environment, it shouldn’t be surprising at all that much of the work done around the science, be it some physical apparatus or something virtual like code, is duct-taped together and barely functional. To some degree that’s by design: it’s choosing where you focus your limited resources, which is to explore and test an idea.

Software is very rarely the end goal, just like in business. The difference in business is that if the software is viewed as a long-term asset, more time is spent trying to reduce long-term costs. In research and science, if something is very successful and becomes mature enough that it’s expected to remain around for a while, more mature code bases often emerge. Even then there’s not a lot of money out there to create that stuff, but it does happen, though only after it’s proven to be worth the time investment.

frostix commented on Cleaning up my 200GB iCloud with some JavaScript   andykong.org/blog/icloudc... · Posted by u/amin
mft_ · 2 years ago
Okay, so it could be a bug, but is it possible that iCloud is secretly storing more than one version of the file in some cases? (After all, Apple does similar things with other media files.)

The example given at the end is interesting:

> So iCloud says the video is 128MB, I download it and the video is actually 48MB, and my free storage increases by ~170MB when I deleted it. Interesting!

This suggests that iCloud isn't simply misrepresenting the size of the example file, as then you'd expect that deleting the 128MB file would clear ~128MB of iCloud space. Instead, the deletion clears roughly the space it reports (128MB) plus the space of the downloaded version (48MB): 128MB + 48MB = 176 MB - which might be close enough, allowing for rounding errors, as iCloud reports the free space (from the article's example) to the nearest 10 MB.
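The reconciliation above can be checked directly; the numbers are the article's, and the 10 MB tolerance is the rounding granularity the comment assumes for iCloud's free-space figure:

```python
# Reconciling the sizes from the article's example.
reported_size = 128   # MB, what iCloud claims the video is
downloaded_size = 48  # MB, actual size of the downloaded file
freed_space = 170     # MB, approximate increase after deletion

# If iCloud stored both versions, deleting should free their sum.
expected_freed = reported_size + downloaded_size
rounding = 10  # assumed: free space reported to the nearest 10 MB

print(expected_freed)                                # 176
print(abs(expected_freed - freed_space) <= rounding) # True
```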

frostix · 2 years ago
Differential backups or any sort of versioning seemed like one of the most obvious culprits (that, and/or fully redundant storage to preserve the file), but the issue with all of this is that it’s entirely opaque.

Ultimately you’re increasingly tethered to some service for your storage that you pay for periodically based on total storage, yet you have little to no information on how best to optimize that storage if you want to stay in a fixed cost bracket or lower your storage/cost ratio. So as a consumer, do I just wave my hands and keep throwing more and more money at the problem, especially now that devices increasingly push everything, including storage, as a subscription service to meet my actual functional needs (needs that realistically could be met by local storage options if manufacturers didn’t have a vested interest in pushing me toward service-based storage)?

The modern business strategy in technology is simply hiding behind complexity: the cost is too complex for you to understand, it gives too much information away about our internals to competitors, and so on. Yet somehow these metrics get derived to assure the business is operating above cost, because when the rubber meets the road it must be done; it’s only when the consumer wants to understand that it’s suddenly too complex. The problem is that tech in many cases is growing to scales that really are too complex, and business managers know this, so it’s often a valid excuse to hide behind. Conveniently, that’s also where they focus on investment and padding margins.

frostix commented on Possible Meissner effect near room temperature: copper-substituted lead apatite   arxiv.org/abs/2401.00999... · Posted by u/zaikunzhang
hypercube33 · 2 years ago
One would think that with all of our crazy AI and supercomputers and quantum computers, a team would give it evolutionary goals of just running simulations of molecular combinations to reach superconductivity. Sure, it'd be one thing to make it in a computer, and making the materials in the real world is quite another, but I'm kind of shocked no one has come forward with something yet. I saw simulations of whole viruses running on a cluster of computers where they test out drugs and how they interact with the virus and simulated human cells, so one would think it's something that would be possible with enough effort?
frostix · 2 years ago
This is done across many disciplines to try to aid new discovery paths. Typically you’re limited in exactly what you can simulate, and oftentimes solution candidates may be found that are impractical, currently impossible, or perhaps actually impossible to produce. Sometimes you can add search constraints to tie simulations together and narrow down such false-positive solutions, but not always. Heck, in some cases it’s literally cheaper and more accurate to do the bench science, no matter how alluring virtualized renditions may be.

Most fields are still left with piles and piles of potential solutions to sort through. They often select the candidates that are the cheapest and most practical to approach, or those for which they have a high suspicion of success, and pursue those. At the end of the day, though, we don’t have full universe simulators at every scale we’d want; we have very specific area simulators within very specific bounds. You have to go out and empirically test these things.
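The screen-then-test loop described above can be sketched roughly like this. Everything here is hypothetical: the scoring function stands in for an expensive domain simulation, and the feasibility checks stand in for the search constraints that weed out false-positive candidates before bench work.

```python
# Toy sketch of simulation-guided candidate screening (hypothetical
# fields and thresholds; a real pipeline would call a domain simulator).

def simulated_score(candidate):
    # Hypothetical: higher predicted value = more promising in simulation.
    return candidate["predicted_tc"]

def is_feasible(candidate):
    # Hypothetical constraints: cheap enough and actually synthesizable.
    return candidate["cost"] <= 100 and candidate["synthesizable"]

def screen(candidates, top_n=2):
    # Filter out impractical/impossible candidates first...
    feasible = [c for c in candidates if is_feasible(c)]
    # ...then rank the survivors; only the top few are worth the
    # expense of empirical bench testing.
    ranked = sorted(feasible, key=simulated_score, reverse=True)
    return ranked[:top_n]

candidates = [
    {"name": "A", "predicted_tc": 290, "cost": 50, "synthesizable": True},
    {"name": "B", "predicted_tc": 310, "cost": 500, "synthesizable": True},   # too costly
    {"name": "C", "predicted_tc": 250, "cost": 30, "synthesizable": True},
    {"name": "D", "predicted_tc": 400, "cost": 20, "synthesizable": False},  # can't be made
]

shortlist = screen(candidates)
print([c["name"] for c in shortlist])  # ['A', 'C']
```

Note that the highest-scoring candidate (D) is exactly the kind of false positive the constraints exist to catch: great in simulation, unmakeable at the bench.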

But this has already been going on for decades across most disciplines I’ve interacted with; they just weren’t using DNNs or LLMs at the time, though domains are adopting those as well where feasible in the search process.

I work with a variety of people interested in leveraging simulation, and everyone wants to take the successes they see in LLMs, or say RL from AlphaStar or AlphaGo, and apply them in their domain. It’s alluring, I get it. The issue is that we often lack enough real understanding in these domains: the science isn’t as airtight as people think it is, it’s too general or too narrow, or in some cases we have a good suspicion of how to build better, more accurate simulations but there’s not enough compute power or energy in the world to make them currently practical. So we take tradeoffs and live with less accurate and detailed simulation, which leads to inaccurate representations of reality and ultimately inaccurate candidate solutions.

frostix commented on Generative AI flooding online crocheting spaces with unrealistic amigurumi pics   twitter.com/LauraRbnsn/st... · Posted by u/YeGoblynQueenne
politelemon · 2 years ago
What do the people posting the fakes, get out of it?
frostix · 2 years ago
The issue with generative AI techniques in general is how low the barrier to entry is. Various forms of information that used to be difficult or resource intensive to create have suddenly become approachable and even trivial in terms of resource investment to create.

Overall, in any sort of cost/benefit analysis, the cost is just so low now that the benefits don’t have to be much of anything, if anything at all. Entertainment factor alone, boredom, or perhaps a passing curiosity to try something are enough to create false or misleading information and push it out to the public, creating noise that has to be filtered through. There are plenty of other, far stronger motives that make the problem even worse.

Misinformation and disinformation were already becoming an increasingly large societal issue IMHO. That is only going to get worse with wide access to generative AI. We already have a high degree of erosion in social trust, where we pretty much have to consider the motives and driving forces behind every transactional relationship we have these days, and we could at least use costs to help sort that mess out: why would someone bother investing the resources to do this? Does it cost a lot to present me with false information, and if so, is there enough potential motive behind it to make this information more likely to be false or misleading?

The answer to this is increasingly yes. It’s now far more difficult to start from a position of distrust and move to a point of trust or likelihood of trust, and I think we’re going to see that even more in all sorts of aspects of daily life. I now have to assume most pieces of information out there are targeting me and attempting to manipulate me in some way (more than before). I fear we’re moving to a model of free speech that puts more weight on “authoritative” sources than in the recent past, in many cases because of the liabilities authorities face when presenting false, misleading, or inaccurate information. Liabilities that in many cases aren’t real, just perceived, granting authoritative information sources far more credit than is due.
