nealeratzlaff commented on Show HN: Building a 42-inch E-Ink frame for generative art   eliot.blog/e-ink-frame... · Posted by u/ea016
nealeratzlaff · 2 years ago
This is really cool! I've wanted to buy something similar for a long time, but in RGB. I know Samsung makes the Frame, but it's not hackable, and it's a waste of power. I don't ever want the thing to function as a TV, just a way to display generative art.

At one time there apparently was this: https://mono.frm.fm/en/shop/ But the price was crazy. There's not much out there in terms of appropriately priced digital picture frames.

nealeratzlaff commented on Ask HN: What weird technical scene are you fond/part of?    · Posted by u/ForgotIdAgain
etrautmann · 3 years ago
Algorithmic art and pen plotters - super fun and wonderful community.
nealeratzlaff · 3 years ago
I also try to be part of the algorithmic/generative art community. It seems really scattered but the variety of techniques and ideas flowing around is really inspiring :)

After Stable Diffusion/Midjourney, I've noticed the community is a little leery of deep learning. But I'm trying to carve out a space using neural networks in a different way anyway.

nealeratzlaff commented on I've said it all before but here we go again   backreaction.blogspot.com... · Posted by u/paulmooreparks
bjornsing · 3 years ago
I only have a master in (engineering) physics but working with statistical modeling and AI I’ve come to appreciate the “all models are wrong but some models are useful”-mindset, and I’ve started applying that to physical “laws” as well: I no longer see them as some divine truths waiting to be discovered, but more like models of the world that will always be wrong but sometimes useful.

From that point of view what’s happening in physics today is no surprise, but it is a bit depressing: we’ve probably passed the level of complexity where models are useful and are now adding detail that make them less so. I guess you can see it as a form of overfitting, like when less scrupulous AI researchers use the test set for validation.

nealeratzlaff · 3 years ago
I used this quote in the heading of my dissertation on uncertainty in deep learning, and I think it's simultaneously true for lots of fields. In ML research we celebrate these enormous models that do everything (GATO, Flamingo, CoCa, etc.) since it feels like we're getting close to something real or universal. I imagine particle physicists feel something similar about expanding the standard model (SUSY(?), quantum gravity).

So in the sense that people get excited about science (see AI lately), I think these models are useful, even if they are pretty mis-specified in the grand scheme of things. I can't speak to Sabine's specific frustrations, but it sounds a little bit like Gary Marcus's concerns about neural networks. In my day-to-day I definitely value a useful model over an exact one.

nealeratzlaff commented on DALL·E 2 prompt book [pdf]   dallery.gallery/wp-conten... · Posted by u/tomduncalf
soperj · 4 years ago
Honestly, I have a great use case for it currently, but then I realized it can only do square pictures, when I really want something that is much wider than it is tall.
nealeratzlaff · 4 years ago
You can get DALLE-2 to give different sized images by using their tool to crop out most of the image you already generated, then have it inpaint a completion. You can then use any image editing tool to combine the two images.
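The final "combine the two images" step can be done with any editor; as a minimal sketch (assuming Pillow is available, and assuming two hypothetical same-height generations that share an overlapping seam), it might look like:

```python
from PIL import Image

def stitch_horizontal(left: Image.Image, right: Image.Image, overlap: int) -> Image.Image:
    """Paste two same-height images side by side, letting `overlap`
    pixels of the right image cover the right edge of the left one
    (the region DALL-E inpainted from the cropped original)."""
    assert left.height == right.height
    width = left.width + right.width - overlap
    canvas = Image.new("RGB", (width, left.height))
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width - overlap, 0))
    return canvas

# placeholders standing in for two 1024x1024 generations with a 256px seam
a = Image.new("RGB", (1024, 1024), "navy")
b = Image.new("RGB", (1024, 1024), "teal")
wide = stitch_horizontal(a, b, overlap=256)
print(wide.size)  # (1792, 1024)
```

Repeating the crop-inpaint-stitch cycle extends the canvas as far as you like in either direction.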
nealeratzlaff commented on World Ice Theory and the Supernatural Imaginary of the Third Reich (2017)   laphamsquarterly.org/roun... · Posted by u/lermontov
galaxyLogic · 6 years ago
I never quite understood why many religious people have a problem with Evolution. If God created everything, surely he could create Evolution as well.
nealeratzlaff · 6 years ago
For a lot of the otherwise science-accepting people I grew up with, the issue isn't that penguins or insects adapted to their environment in unique ways. The issue lies in humans coming from monkeys. People I've known have a hard time with us not being created as special companions for the creator.
nealeratzlaff commented on AI and Deep Learning in 2017 – A Year in Review   wildml.com/2017/12/ai-and... · Posted by u/MrQuincle
hbt · 8 years ago
the fact this thread got over 100 points without a comment or discussion proves what I suspected.

most of us are observers when it comes to this tech. sure, there is some quick tutorial to build a "learning model" somewhere or read an article to grasp what is being discussed...

but when it comes to contributing something remotely significant, we got nothing. whether it is due to lack of resources (giant datasets and computing power) or lack of knowledge and experience to even theorize something plausible.

this is even more frustrating knowing how important of a role this tech will play in the future.

being a passive observer is not enough.

I'm looking forward to 2018 AI summary.

nealeratzlaff · 8 years ago
I think for most people that hurdle of understanding is immense, and isn't likely to get any better. ML/DL infrastructure will continue to improve and deploying AI systems will become more accessible as time goes on. But I don't see familiarity with (non-)convex optimization, statistical learning, topology, etc. becoming mainstream. Without these it's hard to see a reasonable path forward for AI.

For me this year in DL was a lot narrower: it mostly revolved around systems becoming more human in capability. I think most of this is in the article already:

We can now hopefully say that ImageNet has been solved.

AlphaZero tackled structured games with a single algorithm [1].

Tacotron 2 produced completely passable TTS [2].

GANs improved dramatically and saw some good theory to back it up [3] [4].

We also started to care more about how well our models do in general, which to me shows maturity:

Adversarial examples showed their teeth, and hopefully convinced everyone to care about robust models [5] [6]. Reinforcement learning algorithms were shown to have poor transferability [7].

In 2018 I hope to see a new kind of CNN. Residual-style networks are the norm now, since they mostly solve problems with gradient flow. But take away all the skip connections and we're mostly left with a VGG-style linear net with box filters. I'd be really excited to see a network with image-sized conv filters that could adapt their shape (and therefore representational power) to a given feature or signal.

Hopefully 2018 is the year where people stop calling AI a one-trick pony. I don't hear it as often these days, but I think it's time to put that phrase to rest.

[1] https://arxiv.org/abs/1712.01815

[2] https://research.googleblog.com/2017/12/tacotron-2-generatin...

[3] https://arxiv.org/abs/1710.10196

[4] https://arxiv.org/abs/1701.04862

[5] https://arxiv.org/abs/1707.07397

[6] https://arxiv.org/abs/1710.06081

[7] https://arxiv.org/abs/1709.06560
