You can window the wavelet, then slide the finite-duration wavelet by a few samples at a time, even if the wavelet is hundreds to thousands of samples long. This is possible in STFT as well (each part of the original signal shows up in many separate FFTs).
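The overlap idea — each input sample landing in several analysis frames — can be sketched with a tiny framing helper (a hypothetical function, not from any particular library; frame length and hop are arbitrary):

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice x into overlapping frames: frame i covers samples
    [i*hop, i*hop + frame_len). Same framing an STFT (or a windowed,
    slid wavelet) uses; trailing samples that don't fill a frame are dropped."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

x = np.arange(1024, dtype=float)
frames = frame_signal(x, frame_len=256, hop=64)
# With a 256-sample window slid 64 samples at a time, each interior
# sample shows up in 256/64 = 4 separate frames (hence 4 separate FFTs).
```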
Again, I don't know the implementation details of wavelet transforms. Maybe I'll look into your repo when I have time. What's your asymptotic and practical runtime?
It is easier to configure (fewer parameters, no window function to choose), offers more flexibility (exponentially spaced frequency bands, e.g. for musical scales), and can reach the Heisenberg-Gabor uncertainty limit without artifacts.
The only downside is that you need the entire signal in advance, so it can only be used on recordings.
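A minimal numpy-only sketch of the whole-signal, no-window-function approach: a Morlet CWT done in the frequency domain. This is the general technique, not CCWT's actual API — the function name, the `w0` default, and the scale formula are all illustrative assumptions:

```python
import numpy as np

def morlet_cwt(x, freqs, fs, w0=6.0):
    """FFT-based continuous wavelet transform with a Morlet mother wavelet.
    Needs the whole signal up front: one full-length FFT of x, then one
    multiply + inverse FFT per analysis frequency. Sketch only."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # rad/s per bin
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)                   # scale for centre freq f
        # Morlet wavelet in the frequency domain; positive-frequency
        # support only, so the result is an analytic signal per band.
        psi = np.exp(-0.5 * (s * omega / fs - w0) ** 2) * (omega > 0)
        out[i] = np.fft.ifft(X * psi)
    return out

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 50.0 * t)
freqs = np.geomspace(20.0, 200.0, 32)   # exponentially spaced bands
coeffs = morlet_cwt(x, freqs, fs)
```

Note that `freqs` can be spaced however you like (here geometrically, as for musical scales) — there is no fixed time/frequency grid as in the STFT.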
Shameless self-promo of my implementation: https://github.com/Lichtso/CCWT
And to save folks a click:
> The Client VM compiler does not try to execute many of the more complex optimizations performed by the compiler in the Server VM, but in exchange, it requires less time to analyze and compile a piece of code. This means the Client VM can start up faster and requires a smaller memory footprint.
There are so many genuinely insane things about the JVM and its variants that I totally get why some folks are "JAVA OR DEATH." I just wish I'd started learning it 20 years ago, like a lot of them, so it wasn't such a gigantic wall of pedantic knowledge to acquire.
Just copy Rust. Trivially copyable types aren't invalidated by moves, and "move" just aliases copy.
What if you move a unique_ptr out of a std::vector? You don't know which elements of the vector need to have their destructors run.
I think Rust unconditionally doesn't run the destructor of a moved-from Box, but uses drop flags for "maybe-moved-from" local variables, and doesn't allow maybe-moved-from Vec elements.
I do not see it that way. A whole generation of students is growing up with the wrong impression that writing a "for" loop is inevitably inefficient. They also believe that it is OK for large arrays of numbers not to be a core language construct, requiring an external library like numpy. These two incorrect beliefs are incredibly damaging, in my view.
In a sane programming environment (e.g., one with JIT compilation), writing a matrix product as three nested loops should be just as efficient as calling a matrix product routine. The matrix product should also be readily available without needing to "import" anything.
On the other hand, in C++, hand-rolled matrix multiplication is both slower and an order of magnitude less accurate than MKL (and possibly OpenBLAS too).
My favorite screenshot is the last one; it looks like a generic mobile Firefox error for a malformed URL.
One possible issue is that Fennec/Fenix doesn't open http URLs in external apps by default (unsure about custom protocols), whereas Chrome does.
- You'd have to ensure that your large data structure gets allocated entirely within the special region. That's simple enough if all you have is a big array, but it gets more complicated if you've got something like a map of strings. Each map cell and each string would need to get allocated in the special region, and all of the types involved would need new APIs to make that happen.
- You'd have to ensure that data structures in your special region never hold references to anything outside. Since the whole point of the region is that the GC doesn't scan it, nothing in the region will be able to keep anything outside the region alive. Any external references could easily become dangling pointers to freed memory, which is the sort of security vulnerability that GC itself was designed to prevent.
All of this is doable in theory, but it's sufficiently difficult, and it comes with sufficiently many downsides, that it makes more sense for a project with these performance needs to just use C or Rust or something.