esrh commented on Show HN: A Lisp Interpreter for Shell Scripting   github.com/gue-ni/redstar... · Posted by u/quintussss
em-bee · 2 months ago
can you give an example of what variable substitution looks like in your language?

one of the things i think a lisp for shell should have (and i agree that this may not be easy) is unix commands as first-class functions, as in, you should not need a $ or sh macro to make them work. the other thing is that strings should not need to be quoted, so you need something else to designate variables, like $path or ($ path)

esrh · 2 months ago
Yes, I agree that unix commands should be first class. I did this for the super common stuff like ls and cp. As for substitution, I used exactly that: $ for substitution. You'd write something like ($ rsync -avP $src $dst), but I don't think I ever got around to implementing $() to evaluate forms. If you really need to do that, you have to quasiquote the whole expression and unquote the form you need to evaluate. This has been relatively ok for me, though. I never implemented anything like pipes or redirection; I instead just send everything like that to bash.
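For illustration, here's roughly what that kind of expansion could look like (a hypothetical sketch in Scheme-style pseudocode, not the actual implementation):

```scheme
;; Hypothetical sketch, not the actual implementation.
;; ($ rsync -avP $src $dst) expands to something like:
(let ((cmd (string-join
            (list "rsync" "-avP" src dst)  ; $src/$dst resolve to the
            " ")))                         ; lexical variables src and dst
  (system cmd))                            ; hand the whole line to sh
```

The macro only has to decide, per symbol, whether it's a literal word or a $-prefixed variable reference; everything else is string assembly.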

This is not really relevant to your question, but I regret choosing janet for this: it's too opinionated, and hacking on its C internals is not as fun as hacking on lisp. I started writing my own version of schemesh in racket, but never got far with it.

esrh commented on Show HN: A Lisp Interpreter for Shell Scripting   github.com/gue-ni/redstar... · Posted by u/quintussss
esrh · 3 months ago
awesome! I have wanted something like this for a long time. Currently I use a janet fork <https://github.com/eshrh/matsurika> with some trivial additions, the most important of which is a `$` macro that does what the `sh` macro does here. I have two questions:

- I see that `sh` does not take in strings but instead lisp forms. How do you distinguish between variables that need to be substituted and commands? In my fork, variable substitution involves quasiquoting/unquoting.

- Almost all of the features that make your language good for shell scripting are essentially syntactic features that could easily be implemented as a macro library for, say, scheme. Why did you choose to write it in C++? Surely performance is not an important factor here. (I'm interested because I am currently working on a scheme-based shell scripting language.)

esrh commented on High-resolution efficient image generation from WiFi Mapping   arxiv.org/abs/2506.10605... · Posted by u/oldfuture
RicDan · 3 months ago
Yeah, this seems too insane to be true. I understand that wifi signal strength etc. is heavily impacted by the contents of a room, but even so it seems far-fetched that its distortion carries enough information to produce these results.
esrh · 3 months ago
A lot of wifi sensing results with high-dimensional outputs use wideband links... your average wifi connection uses 20MHz of bandwidth and transmits on 48 spaced-out frequencies (subcarriers). In the paper, we use 160MHz, with effectively 1992 input data points. Even that isn't enough to predict a 3x512x512 image directly, which motivated predicting 4x64x64 latent embeddings instead.
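To put rough numbers on that (a back-of-the-envelope sketch using the counts above):

```python
# Back-of-the-envelope dimensionality comparison.
csi_points = 1992              # input data points on a 160 MHz link
image_dim = 3 * 512 * 512      # RGB image in pixel space
latent_dim = 4 * 64 * 64       # stable-diffusion-style latent

print(image_dim)               # 786432
print(latent_dim)              # 16384
print(image_dim // latent_dim) # 48x fewer outputs to predict
```

Predicting the latent shrinks the output space by a factor of 48, which is what makes the regression tractable from so few input points.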

The more space you take up in the frequency domain, the higher your resolution in the time domain. Wifi sensing systems that detect heart rate or breathing, for example, use even larger bandwidths, to the point where it would be more accurate to call them radars than wifi access points.
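As a rough sketch of that tradeoff (textbook Fourier/radar relations, not numbers from the paper): time resolution is about 1/B for bandwidth B, and the corresponding two-way range resolution is c/(2B).

```python
# Rough bandwidth -> resolution figures (standard relations, not from the paper).
C = 3e8  # speed of light, m/s

for bw in (20e6, 160e6):
    dt = 1 / bw        # time-domain resolution, seconds
    dr = C / (2 * bw)  # two-way range resolution, meters
    print(f"{bw/1e6:.0f} MHz: dt = {dt*1e9:.2f} ns, range res ~ {dr:.2f} m")
```

A 20 MHz link resolves paths no finer than ~7.5 m, while 160 MHz gets you under a meter, which is why the wideband setups start to look like radar.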

esrh commented on High-resolution efficient image generation from WiFi Mapping   arxiv.org/abs/2506.10605... · Posted by u/oldfuture
fxtentacle · 3 months ago
FYI the images are not generated based on the WiFi data. The WiFi data is used as additional conditioning for a regular diffusion image generation model. So what that means is the WiFi measurements are used for determining which objects to place where in the image, but the diffusion model will then fill in any "knowledge gaps" with randomly generated (but visually plausible) data.
esrh · 3 months ago
Think of it as an img2img stable diffusion process, except instead of starting with an image you want to transform, you start with CSI.

The encoder itself is trained on latent embeddings of images in the same environment with the same subject, so it learns the visual details that are preserved through the original autoencoder (this is why the model can't overfit on fine detail like, say, text or faces).
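A shape-level sketch of that pipeline (hypothetical stand-in functions with random projections in place of the trained networks; only the array shapes match the paper):

```python
import numpy as np

# Shape-level sketch only: random projections stand in for the trained
# CSI encoder and the pretrained stable-diffusion decoder.
rng = np.random.default_rng(0)

def csi_encoder(csi):
    # Maps flattened CSI -> 4x64x64 latent (stand-in: linear projection).
    W = rng.standard_normal((4 * 64 * 64, csi.size))
    return (W @ csi).reshape(4, 64, 64)

def sd_decoder(latent):
    # Latent -> 3x512x512 image (stand-in: nearest-neighbour upsample).
    return latent[:3].repeat(8, axis=1).repeat(8, axis=2)

csi = rng.standard_normal(1992)  # one 160 MHz CSI measurement
latent = csi_encoder(csi)        # shape (4, 64, 64)
image = sd_decoder(latent)       # shape (3, 512, 512)
print(latent.shape, image.shape)
```

The point of the sketch is just that the trained part maps CSI into the same 4x64x64 latent space the frozen decoder already understands.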

esrh commented on High-resolution efficient image generation from WiFi Mapping   arxiv.org/abs/2506.10605... · Posted by u/oldfuture
phh · 3 months ago
Is there a survey of SoTA of what can be achieved with CSI sensing you would recommend?

What is available on the low level? Are researchers using SDR, or there are common wifi chips that properly report CSI? Do most people feed in CSI of literally every packet, or is it sampled?

esrh · 3 months ago
I'd suggest reading https://dl.acm.org/doi/abs/10.1145/3310194 (2019) for a survey of early methods, and https://arxiv.org/abs/2503.08008 for more recent work.

As for low level:

The most common early hardware was, afaik, ESP32s with https://stevenmhernandez.github.io/ESP32-CSI-Tool/, and old Intel NICs with https://dhalperi.github.io/linux-80211n-csitool/.

Now many people use https://ps.zpj.io/, which supports a range of hardware including SDRs, but I must discourage using it, especially for research, as it's not free software and has a restrictive license. I used https://feitcsi.kuskosoft.com/, which uses a slightly modified iwlwifi driver, since iwlwifi needs to compute CSI anyway. There are free software alternatives for SDR CSI extraction as well: it's not hard to build an OFDM chain in GNU Radio and extract CSI, although this requires a slightly more in-depth understanding of how wifi works.

esrh commented on High-resolution efficient image generation from WiFi Mapping   arxiv.org/abs/2506.10605... · Posted by u/oldfuture
esrh · 3 months ago
This is my paper (first author).

I think the results here are much less important and surprising than some people seem to be thinking. To summarize the core of the paper: we took stable diffusion (a three-part system of encoder, U-Net, and decoder) and replaced the encoder with one that takes WiFi data instead of images. This gives you two advantages: you get text-based guidance for free, and the encoder model can be smaller. The smaller model, combined with the semantic compression from the autoencoder, gives you better (SOTA-resolution) results, much faster.

I noticed a lot of discussion about how the model can possibly be so accurate. It wouldn't be wrong to consider the model overfit, in the sense that the visual details of the scene move from the training data into the model weights. These kinds of models are meant to be trained and deployed in a single environment. What's interesting about this work is that learning the environment has become really fast, because the output dimension is smaller than image space. In fact, it's so fast that you can basically do it in real time... you turn on a data collection node, train a model from scratch online in a new environment, and get decent results, with at least a little bit of interesting generalization, in ~10 minutes. I'm presenting a demonstration of this at MobiCom 2025 next month in Hong Kong.

What people call "WiFi sensing" is now mostly CSI (channel state information) sensing. When you transmit a packet on many subcarriers (frequencies), the CSI describes how the signal on each subcarrier was changed during transmission. So, CSI is inherently quite sensitive to environmental changes.
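A minimal sketch of that per-subcarrier view (textbook channel estimation against a known pilot symbol; illustrative only, not any particular driver's method):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 48  # data subcarriers on a 20 MHz OFDM link

# Known transmitted pilot symbols, one per subcarrier (BPSK here).
x = rng.choice([1 + 0j, -1 + 0j], size=n_sub)

# The channel scales and rotates each subcarrier independently.
h_true = rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub)
noise = 0.01 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
y = h_true * x + noise

# CSI estimate: how each subcarrier's symbol was changed in transit.
h_est = y / x

print(np.max(np.abs(h_est - h_true)))  # small: noise-limited error
```

Each complex entry of `h_est` captures the attenuation and phase shift on one frequency, and it's exactly those entries that wiggle when people, walls, or furniture change the multipath environment.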

I want to point out something that almost everybody working in the CSI sensing/general ISAC space seems to know: generalization is hard, and most definitely unsolved, for any reasonably high-dimensional sensing problem (like image generation and, to some extent, pose estimation). I see a lot of fearmongering online about wifi sensing killing privacy for good, but in my opinion we're still quite far off.

I've made the project's code and some formatted data public since this paper is starting to pick up some attention: https://github.com/nishio-laboratory/latentcsi

esrh commented on 'World Models,' an old idea in AI, mount a comeback   quantamagazine.org/world-... · Posted by u/warrenm
yellow_postit · 4 months ago
Not mentioning Fei-Fei Li and her startup explicitly focused on world models is an interesting choice by the author.
esrh · 4 months ago
They also don't mention the famous paper by Ha & Schmidhuber (https://arxiv.org/abs/1803.10122).

The worst part is that they namedrop many other tangentially related and/or outright fraudulent "ai experts" like Hinton, Bengio, and LeCun.

esrh commented on Never Missing the Train Again   lilymara.xyz/posts/2024/0... · Posted by u/thimabi
esrh · a year ago
Yeah, I wish more programs worked like this.

I wrote something similar on a smaller scale for the Keihin-Kyuukou line in Japan: https://rail.esrh.me. Now I live in Tokyo and there are several transit options close by, so I would love to have an always-on display like this in my room.

Unfortunately, while public transit in the US and Europe seems to be tracked by services with developer-friendly APIs, this is not the case in Japan as far as I know -- not that it was much of a problem back then; I just needed to do some light web scraping.

I wrote all of the scraping, data processing, and frontend code in clojure and clojurescript, and wrote a small blog post about it here: https://esrh.me/posts/2023-03-23-clojure

esrh commented on Common Lisp Is Not a Single Language, It Is Lots   aartaka.me/cl-is-lots... · Posted by u/signa11
tombert · 2 years ago
I still haven't learned any flavor of Common Lisp; I've always played with Clojure and Racket to get my Lisp fix.

Still, people whom I really respect have told me that Common Lisp is "better", by whatever definition of the word. I don't really see how it'd be better than Clojure, but I was curious if anyone here could explain the perks of Common Lisp (any version) over other variants?

esrh · 2 years ago
One thing is that CL has a huge ecosystem and a wide variety of compilers for every purpose you might have. Quicklisp probably beats racket and scheme in this regard, but today clojure might have an edge.

When you use racket and clojure specifically, you're kind of walling yourself into one compiler and ecosystem (two, for clj/cljs). This is a significant disadvantage compared to scheme and CL.

CL the way most people write it is way too imperative and OO for me; it reads like untyped java with metaprogramming constructs. Clojure and scheme in my opinion guide you towards the more correct and principled approaches.

Out of all the lisps, regardless of which ones I like the most, I objectively write emacs lisp the most. This has definitely influenced my opinions on CL syntax, like my weird love-hate relationship with the loop macro: on one hand it's a cool, tacit construct that would be impossible in a non-lisp, but on the other hand it hides a lot of complexity and is sometimes hard to get right.
