I did the implementation in JS to help other people reading the paper, and tried to keep everything as close as possible to the pseudocode from just this single paper. It might be cool to integrate an additional grid on which incompressibility is enforced more strictly, but I didn't want to make the source confusing.
It is also a little difficult to do density ratios with just what is shown in the paper; here the masses are set to (1, .8, .6, .4). This is what causes the lightest particles to get launched so violently into the air. It would probably be useful to integrate some ideas from the paper "Density Contrast SPH Interfaces" (Barbara Solenthaler, Renato Pajarola).
I started revisiting SPH because I have some new ideas for combining it with an MPM/FLIP grid for closeups. I'm trying to do a multi-scale MPM simulation that can better handle surface-tension droplets in closeups while also doing extremely large-scale scenes. You can see some larger-scale parallel sims on my YouTube: https://www.youtube.com/c/GrantKot
Short clip of my implementation here: https://imgur.com/a/2IARiBq
I like the different fluids in your sim though, I may try that myself.
The entire thing fell out of the plane (phone with charging cable still plugged in), then landed in a tree, where the charging cable got tangled in the branches; that's when the phone broke free and fell into the grass.
So the phone was able to release its kinetic energy in two big events (plus maybe a few branch hits), not a direct impact on the ground.
I wonder if they could somehow analyze the phone's accelerometer data and figure out whether it correlates with that scenario.
I think dirt/grass is just a lot softer than the things we usually drop our phones onto, like concrete or tile.
https://youtu.be/rSKMYc1CQHE?si=pXdsHlQSCpw8nY8m
The GitHub repository also contains links to some of the research papers used to implement the simulation.
For starters, the way he’s doing the spatial lookup has poor cache performance: each neighbor lookup is another scattered read. Instead of rearranging an array of indices during the sort, rearrange the particle values themselves. That way you're doing sequential reads for each grid cell you search for neighbors in, instead of a series of scattered reads. The performance improvement I got was about 2x, which was pretty impressive for such a simple change.
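A minimal sketch of that reordering, assuming a struct-of-arrays layout and a precomputed cell index per particle (all names here are my own, not from his code):

```javascript
// Scatter the particle data itself into cell order once per frame, instead of
// sorting an index array and chasing pointers on every neighbor lookup.
function reorderByCell(posX, posY, cellOf) {
  const n = posX.length;
  // sort particle indices by their grid cell
  const order = Array.from({ length: n }, (_, i) => i)
    .sort((a, b) => cellOf[a] - cellOf[b]);
  const outX = new Float32Array(n);
  const outY = new Float32Array(n);
  for (let k = 0; k < n; k++) { // one gather per frame...
    outX[k] = posX[order[k]];   // ...so neighbor loops later read each
    outY[k] = posY[order[k]];   // cell's particles as a contiguous run
  }
  return { posX: outX, posY: outY };
}
```

After this, iterating a grid cell's particles is a sequential scan over a contiguous slice rather than a scattered read per neighbor.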
The sorting algorithm used isn’t the fastest, either; counting sort had much better performance for me and was simpler to conceptualize. It does involve a prefix sum, which is easy to do sequentially on the CPU but more of a challenge if you want to keep it on the GPU. See "Fast Fixed-Radius Nearest Neighbors: Interactive Million-Particle Fluids" by Hoetzlein [0].
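Roughly, the counting sort looks like this (a hedged sketch; `cellOf` maps each particle to its flattened grid cell, and the prefix sum is done sequentially on the CPU):

```javascript
// Counting sort over grid cells: histogram, prefix sum, scatter.
// Returns cellStart offsets plus particle indices in cell order, so cell c's
// particles occupy the slice sorted[cellStart[c] .. cellStart[c + 1]].
function countingSortByCell(cellOf, numCells) {
  const n = cellOf.length;
  const counts = new Uint32Array(numCells + 1);
  for (let i = 0; i < n; i++) counts[cellOf[i] + 1]++;            // histogram
  for (let c = 0; c < numCells; c++) counts[c + 1] += counts[c];  // prefix sum
  const cellStart = counts.slice();          // cellStart[c] = first slot of cell c
  const cursor = counts.slice(0, numCells);  // next free slot per cell
  const sorted = new Uint32Array(n);
  for (let i = 0; i < n; i++) sorted[cursor[cellOf[i]]++] = i;    // stable scatter
  return { cellStart, sorted };
}
```

The scatter pass can also write the particle values directly (as in the reordering trick above) instead of indices, which keeps the neighbor reads sequential.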
Or, if you want to keep using bitonic sort, you can use threadgroup memory as a workspace during the bitonic merge steps that operate on small enough chunks of memory. Threadgroup memory is located on the GPU die, so it has better read/write performance.
I ended up converting his pure SPH implementation to use PBF ("Position Based Fluids", Macklin et al. [1]), which is still SPH-based but maintains constant density using a density constraint solver instead of a pressure force. It seems to squeeze more stability out of each “iteration” (for SPH that means breaking a single frame into multiple substeps, but with PBF you can also run more iterations of the constraint solver). It’s also a whole lot less “bouncy”. One note: I had to multiply the position updates by a stiffness factor (about 0.1 in my case) to get stability; the paper doesn’t mention this, so maybe I’m doing something wrong.
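For reference, here is one constraint-solver iteration as I understand it from the paper, with that extra stiffness factor bolted on. This is an all-pairs 2D sketch with my own kernel choices and names, not his code and not production code:

```javascript
// One PBF density-constraint iteration in 2D (after Macklin & Mueller),
// with an ad-hoc STIFFNESS scale on the position correction for stability.
const H = 1.0, REST_DENSITY = 1.0, EPS = 1e-4, STIFFNESS = 0.1;

function poly6(r2) { // 2D poly6 density kernel
  if (r2 >= H * H) return 0;
  const d = H * H - r2;
  return (4 / (Math.PI * H ** 8)) * d * d * d;
}
function spikyGrad(dx, dy) { // 2D spiky kernel gradient
  const r = Math.hypot(dx, dy);
  if (r <= 1e-12 || r >= H) return [0, 0];
  const s = (-30 / (Math.PI * H ** 5)) * (H - r) * (H - r) / r;
  return [s * dx, s * dy];
}

function pbfIteration(px, py) {
  const n = px.length, lambda = new Float64Array(n);
  for (let i = 0; i < n; i++) {
    let rho = 0, grad2 = 0, gx = 0, gy = 0;
    for (let j = 0; j < n; j++) {
      const dx = px[i] - px[j], dy = py[i] - py[j];
      rho += poly6(dx * dx + dy * dy);
      const [ax, ay] = spikyGrad(dx, dy);
      gx += ax; gy += ay; // gradient of C_i w.r.t. particle i
      if (j !== i) grad2 += (ax * ax + ay * ay) / (REST_DENSITY ** 2);
    }
    grad2 += (gx * gx + gy * gy) / (REST_DENSITY ** 2);
    lambda[i] = -(rho / REST_DENSITY - 1) / (grad2 + EPS);
  }
  for (let i = 0; i < n; i++) { // position correction, scaled by STIFFNESS
    let dxAcc = 0, dyAcc = 0;
    for (let j = 0; j < n; j++) {
      const [ax, ay] = spikyGrad(px[i] - px[j], py[i] - py[j]);
      dxAcc += (lambda[i] + lambda[j]) * ax;
      dyAcc += (lambda[i] + lambda[j]) * ay;
    }
    px[i] += STIFFNESS * dxAcc / REST_DENSITY;
    py[i] += STIFFNESS * dyAcc / REST_DENSITY;
  }
}
```

A real implementation would of course use the grid for neighbor lookups instead of the O(n²) loops here.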
The PBF paper also talks about doing vorticity confinement. I implemented it exactly as stated in the paper, but it took me a while to realize I could still do it in 2D. You just have to recognize that while the first cross product produces the signed magnitude of a vector pointing out of the screen, the second cross product produces a 2D vector in the plane of the screen. So there’s no funny business in 2D like I had originally thought. That said, you can skip vorticity confinement; the difference isn't very significant.
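Concretely, the two cross products reduce to this in 2D (a tiny sketch; `omega` is the scalar out-of-plane vorticity):

```javascript
// First "cross product": two in-plane vectors -> scalar z-component (vorticity).
const cross2 = (ax, ay, bx, by) => ax * by - ay * bx;

// Second "cross product": in-plane vector N crossed with (0, 0, omega),
// which lands back in the plane: N x (omega * z-hat) = (Ny*omega, -Nx*omega).
const crossNZ = (nx, ny, omega) => [ny * omega, -nx * omega];
```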
There’s a better (maybe a bit more expensive) method of doing surface tension/avoiding particle clustering. It behaves a lot more like fluids in real life do and avoids the “tendril-y” behavior he mentions in the video. "Versatile surface tension and adhesion for SPH fluids" by Akinci et al [2].
One of the comments on Sebastian's video mentions that doing density kernel corrections using Shepard interpolation should improve the fluid surface. I searched and found this method in a bunch of papers, including "Consistent Shepard Interpolation for SPH-Based Fluid Animation" by Reinhardt et al. [3] (I never implemented the full solution that paper proposes, though). There are kernel corrections, and then there are kernel gradient corrections, which I never got working. With the kernel corrections alone, the surface of the fluid seems to "bunch up" less when it moves, and they were pretty simple to implement. Without them, the surface looks a bit like a slinky or crinkling paper, with particles being pushed out from the surface boundary.
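The basic zeroth-order Shepard correction is just normalizing the kernel weights so they sum to one over the neighborhood, which reduces the density deficit for particles near the free surface. A sketch with illustrative names (`volumes[j]` would be `m_j / rho_j` for each neighbor):

```javascript
// Shepard (zeroth-order) kernel correction: divide each kernel weight W_ij
// by the volume-weighted sum of weights over particle i's neighborhood.
function shepardWeights(weights, volumes) {
  let denom = 0;
  for (let j = 0; j < weights.length; j++) denom += volumes[j] * weights[j];
  // corrected weight ~W_ij = W_ij / sum_k(V_k * W_ik); guard empty neighborhoods
  return weights.map(w => (denom > 0 ? w / denom : 0));
}
```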
I found [0] and [1] on my own but I found [2] through a thesis, "Real-time Interactive Simulation of Diverse Particle-Based Fluids" by Niall Tessier-Lavigne [4]. I also use the 2nd order integration step formula from that paper. It has some other excellent ideas that are worth trying.
Many years ago I used a paper (that is in fact one referenced by Sebastian’s video) and some C sample code I found to write an SPH simulator in OpenCL. I had been wanting to write one again but this time get a real understanding of the underlying mathematics now that I have some more tools under my belt. I owe it to Sebastian that I finally started on my implementation and I understand SPH a lot more now.
[0]: https://on-demand.gputechconf.com/gtc/2014/presentations/S41...
[1]: https://mmacklin.com/pbf_sig_preprint.pdf
[2]: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
[3]: https://www.hdm-stuttgart.de/hochschule/forschung/forschungs...
[4]: https://project-archive.inf.ed.ac.uk/ug4/20181074/ug4_proj.p...
I previously worked at a video relay service company (VRS in the US is a service paid for by the TRS fund through the FCC, and allows Deaf and hard-of-hearing people to make phone calls through video chat with an ASL interpreter). In written English-based interactions with Deaf and hard-of-hearing colleagues, there is often a communication barrier, as there often is with anyone speaking a second language.
In my personal experience working on tickets written by D/deaf colleagues, while sometimes we could communicate by whiteboard or text-based chat, it was indispensable to have the option of an interpreter being present to discuss the ticket.
For example, let’s take Apple. Apple’s passkey support is great: I can store a passkey and it syncs through iCloud to all my devices. So my phone can log in, my MacBook can log in. Neat.
But thinking about this a little more: passkeys on iPhones are secured with Face ID, so to log in I have to use Face ID or other biometrics. But on iPhone you can skip the Face ID check if you know the device passcode as a fallback. So now, if someone has access to my iPhone and knows my passcode, they have access to ALL my accounts that have passkeys stored in my iCloud.
Previously, even if an attacker had access to my iPhone, they still wouldn’t be able to log in because they didn’t know my password. And 1Password itself uses a separate unique password and can’t be bypassed with the device passcode.
I’m honestly surprised that we can’t lock down passkeys on iOS with a separate password/key used to encrypt them. It just seems like I’m giving up security by switching to passkeys, away from randomly generated passwords. IMHO it should be: Face ID, and if you can’t use biometrics, you HAVE to enter a unique passkey-only encryption phrase to unlock them. Not the device passcode.
If not that, then I would have expected passkeys to be the first authentication factor, directly replacing passwords, with something else as a second factor, such as TOTP/YubiKey/SMS auth. But the current implementation on every website I’ve seen so far treats the passkey as “ok, you’re in”, while going the password route usually triggers a second-factor check.
The forecast was updated at 1:00 PM EDT, so I believe it ends Tuesday at 8:00 AM EDT.
Seems like it's decaying; the decay forecast is linked on their homepage [0], and other space-weather sites are forecasting decay as well.
[0]: https://www.swpc.noaa.gov/news/geomagnetic-storm-continues-d...
(As far as I know, the p2p requirements mean that on iOS at least, this cannot be implemented as a web app. Again, would love to learn that I’m wrong.)
Slightly OT, but when might one choose to write part of a new app in Objective-C instead of Swift? From my understanding, Swift can call any ObjC code.
I know very little about aviation, but this seems like a pretty foolish and avoidable incident. I would have thought there would be a procedure for how many people can leave or hang from the back of the plane, as it would very obviously affect how it flies. I can’t imagine that sort of manoeuvre is good for the airframe, either.
This seems like a weakness of the King Air as a jump plane. The pilot has to throttle down the left engine so jumpers aren’t thrown into the tail by the prop wash on exit. I’m happy with our single-engine Super Caravan and its wide door, so not as many people have to be outside the plane to do big groups like this.