I've recently had to implement a few kernels to lower the memory footprint and runtime of some PyTorch functions: it's been really nice because numba kernels have type-hint support (as opposed to raw CuPy kernels, which are written as source strings).
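For what it's worth, here is a minimal sketch of the pattern (the element-wise kernel itself is hypothetical, not the actual function in question): numba kernels can take an explicit type signature, and they consume PyTorch CUDA tensors directly through `__cuda_array_interface__`, so no intermediate copies are needed.

    import torch
    from numba import cuda

    # Hypothetical element-wise kernel with an explicit signature ("type hints").
    @cuda.jit("void(float32[:], float32[:])")
    def scale_kernel(x, out):
        i = cuda.grid(1)
        if i < x.shape[0]:
            out[i] = 2.0 * x[i]

    x = torch.randn(1024, device="cuda", dtype=torch.float32)
    out = torch.empty_like(x)
    threads = 256
    blocks = (x.numel() + threads - 1) // threads
    scale_kernel[blocks, threads](x, out)  # writes directly into the torch buffer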
    package color

    type Color struct {
        val string // unexported: other packages cannot construct new Color values
    }

    func (c Color) String() string {
        return c.val
    }

    var (
        Red   = Color{val: "red"}
        Green = Color{val: "green"}
        Blue  = Color{val: "blue"}
    )
Since `val` is not exported, external packages cannot create arbitrary `Color` values.
- DuckDuckGo for Mac gives you privacy by default
- DuckDuckGo for Mac is really fast!
- DuckDuckGo for Mac is built for security.
Isn't that just (almost / good enough) Safari these days? Plus, with Safari you typically get better battery life, and Apple's monetisation model makes me feel like they'll treat my privacy even better than a search engine would. The only reason I sometimes miss or use browsers like Brave is Chrome extensions.
But it feels like a slow erosion of our control and ownership of our tools, where everything is becoming a rent-seeking opportunity and good tools are only made available for a monthly rent.
Personally I like having my whole build system, IDE, CI/CD on a machine I work at. I get this might not be for everyone, but I think we need to be careful what we give up long-term for these conveniences.
Granted, I could just use vi and a terminal, and nobody is forcing anyone to use anything ... but like many things, these are not equivalent choices.
I depend on my tools, and the fewer dependencies on monthly-paid SaaS features, the better.
As a student, I have a desktop at one of my parents' houses that I can control over ssh; this kind of feature makes remote development much easier and is often needed when I run an intensive task for hours. The experience with VSCode over ssh is really great. Some have pointed out local VMs, which are another use for this.
It's not perfect (it occasionally returns DE results instead of English ones), but from my point of view they're doing something good and I'm sold.
But please, give me a way to pay for it; I don't want to end up being the product one day.
It claims to be diffusion-based, but the two main differences from an approach like Stable Diffusion are that (1) they take a single denoising step instead of the traditional ~1000, and (2) they directly predict the value z^y instead of a noise direction. According to their analyses, both of these differences help on the studied tasks. However, isn't that just how supervised learning has always worked? Aside from having a larger model, this isn't very different from "traditional" depth estimation methods that don't claim anything to do with diffusion.
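To make that contrast concrete, here is a toy sketch (illustrative shapes, a plain linear layer standing in for f_theta, and a placeholder second input in the direct case; this is not the paper's code) of "one step, predict z^y directly" versus the usual "predict the noise" objective:

    import torch
    import torch.nn.functional as F

    f_theta = torch.nn.Linear(8, 4)   # toy stand-in for the denoising model
    z_x = torch.randn(2, 4)           # latent image
    z_y = torch.randn(2, 4)           # latent label (e.g. depth)

    # (1)+(2): one step, predict z^y directly -- i.e. plain supervised regression.
    pred = f_theta(torch.cat([z_x, torch.randn_like(z_y)], dim=-1))
    loss_direct = F.mse_loss(pred, z_y)

    # "Traditional" diffusion objective: predict the added noise at a sampled
    # timestep, then denoise over many steps at inference time.
    eps = torch.randn_like(z_y)
    z_y_noisy = z_y + eps             # noise schedule omitted for brevity
    loss_noise = F.mse_loss(f_theta(torch.cat([z_x, z_y_noisy], dim=-1)), eps)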
It also claims zero-shot abilities, but they fine-tune the denoising model f_theta on a concatenation involving the latent image and apply a loss using the latent label. So their evaluation datasets may be out-of-distribution, but I don't understand how that's zero-shot. Asking ChatGPT to output a depth estimate of a given image would be zero-shot, because it hasn't been trained to do that (to my knowledge).