samwillis · 3 years ago
Previously discussed a couple of weeks ago: https://news.ycombinator.com/item?id=36059878 (101 comments)
dang · 3 years ago
Thanks! Macroexpanded:

DeviceScript: TypeScript for Tiny IoT Devices - https://news.ycombinator.com/item?id=36059878 - May 2023 (101 comments)

iamflimflam1 · 3 years ago
One of the biggest challenges will be drivers for the actual hardware - at the moment it looks like they have support for SPI and for built in LEDs. But it's a big challenge to expose all the different peripherals that come with MCUs in a consistent way.

Actually, looks like they've got quite a lot of stuff done already:

https://microsoft.github.io/devicescript/api/clients

https://microsoft.github.io/devicescript/api/servers

user_account · 3 years ago
With MicroPython you can have C drivers and just do the MicroPython bindings. I do not see the benefits of having yet another interpreted IoT language and it does not make sense to do the drivers in the interpreted language itself.
Ins43b · 3 years ago
I think having a strongly typed language is a great reason to have another interpreted language for IoT/embedded.
jononor · 3 years ago
They should integrate with Zephyr OS, as they have a good generic sensor API, with tons of device drivers and platform support.
AlotOfReading · 3 years ago
I wouldn't say zephyr has "tons" of drivers, except maybe in comparison to other RTOSes. It has a few drivers for each of the device types you might use, but you can't just buy random hardware without checking support the way you can with Linux. There's enough that you have examples for implementing your own drivers pretty easily though.
nicce · 3 years ago
Is the only reason for this to make it possible for web developers or people who know TypeScript to write code for IoT devices? To fill the lack of experienced low-level programmers? Because as a language alone, I don't see a reason why I should ever use this if I am not already familiar with TypeScript.
jqpabc123 · 3 years ago
There are a number of reasons to do this. This sort of setup typically has the VM runtime flashed into the microprocessor's program space with the interpreted byte code stored in data space --- either internal or external.

1) It is obviously not as fast as native but it is fast enough for a lot of applications. In embedded work, you don't get extra credit for being faster than necessary. Speed critical functions like communications are mostly handled at native speed by the VM.

2) Code size. Interpreted byte code (minus the VM) can be smaller and less redundant than native. And by adding cheap external storage, code can easily expand beyond the native program space of the micro.

3) Easy remote updates. Byte code can be received and stored without making changes to the micro's program code (no re-flashing required). It's possible to fix bugs and make changes or even completely repurpose the hardware from afar.
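A minimal sketch of this pattern (a made-up three-instruction stack VM, not DeviceScript's actual bytecode): the interpreter is fixed firmware, while the program is plain data — so a "remote update" just replaces a byte array, with no re-flashing of program space.

```typescript
// Hypothetical 3-instruction stack VM. On a real MCU the interpreter
// would live in flash and the bytecode in (internal or external) data space.
enum Op { PUSH = 0, ADD = 1, HALT = 2 }

function run(bytecode: number[]): number {
  const stack: number[] = [];
  let pc = 0;
  while (true) {
    switch (bytecode[pc++]) {
      case Op.PUSH:                 // next byte is an immediate value
        stack.push(bytecode[pc++]);
        break;
      case Op.ADD: {                // pop two operands, push their sum
        const b = stack.pop()!;
        const a = stack.pop()!;
        stack.push(a + b);
        break;
      }
      case Op.HALT:                 // result is the top of the stack
        return stack.pop()!;
    }
  }
}

// "Remote update": swap the data, never the interpreter.
const programV1 = [Op.PUSH, 2, Op.PUSH, 3, Op.ADD, Op.HALT];
console.log(run(programV1)); // 5
```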

ndiddy · 3 years ago
>In embedded work, you don't get extra credit for being faster than necessary.

For battery powered devices, you absolutely do. When you're talking over a slower protocol, being able to put the processor to sleep for 95% of the time can translate to pretty massive power savings.

riceart · 3 years ago
> In embedded work, you don't get extra credit for being faster than necessary.

You absolutely do when you can cut power requirements and get by with cheaper CPU/hardware. I ran a whole consulting business redesigning poorly designed devices and redoing firmware for cost reduction. How does one decide what is “necessary”? What is necessary in the short term and in the long term are often not the same.

lioeters · 3 years ago
3 is a big one, I think. Using an interpreted language speeds up the development cycle to prototype, change, release, and iterate. And for some purposes it's fast and small enough, that the trade-off is worth it.
numpad0 · 3 years ago
I’m not sure if you get the controllers part of embedded microcontrollers. They’re not for time-independent deep thoughts, they are for controlling devices and peripherals in real time.

1) you get shorter battery life for slower code. Also, everything is speed critical anyways. It sucks when I’m controlling my toy robot and it’s trying to drive into the floor while doMyStuff(); has its head stuck in float math.

2) external anything is added cost to everything; adding an SPI flash for code to an otherwise finished board costs, I don't know, $2 for the chip, $0.02 for capacitors and resistors, 2-3 days to redo the board, software… can I just make it a want item for v2.0?

3) why do I have to do it wireless? Can I just snatch a prototype from test batch, wire it to debugger and leave it on a desk? Do I really have to manage these keys, and why are you going to be signing it off for release if it’s not working?

Embedded devices are not like a Nintendo Switch; they're like Switch Joy-Cons, or sometimes even the buttons on a Joy-Con. They are not a fingernail-sized autonomous Pi-calculation device. Admittedly, Nintendo updates Joy-Con firmware sometimes, but very rarely, and they don't make kids playing the Switch wait for an X button input to finish processing. The buttons are read, sent out, and received. It just makes no sense that adding drag to real-time computing this way would help a lot.

FpUser · 3 years ago
>"In embedded work, you don't get extra credit for being faster than necessary"

You do get credit for using the cheapest and lowest cost MCU for the task which directly depends on performance of the code. In case of battery operated devices it is even more important.

mikepurvis · 3 years ago
The debuggability is also far better I expect, as a person who has spent hours tracing some crash deep in LwIP because of a bad flag or wrong interrupt priority.
dromtrund · 3 years ago
3) isn't really practical without some storage mechanism though. Sure, you can make a change that sits in ram until the next power cycle, but you could do that with firmware too if you plan for it. Whether you store the raw data in executable flash or in some external eeprom doesn't really change the workflow much.
delfinom · 3 years ago
>Is the only reason for this to make it possible for web developers or people who know TypeScript to write code for IoT Devices? To fill the lack of experienced low level programmers?

Which is funny because there's no lack of low level programmers in my experience.

Companies just try and pay Embedded Systems Engineers and Electrical Engineers dirt compared to "Software Engineers". In the same part of NY where SWE base salaries are $160k+, they will offer $120k for the same years of experience to a EE/Embedded SWE. And these are both major companies (non-big tech) and small companies that will do this.

Of course I also see those job postings for those specific companies last a long time here. There's one going for 3 years now for a many decades old small defense contractor that's even had their CTO directly reach out to me. Meanwhile I sit pretty in my actually respectable paying embedded job that I ain't jumping ship for a paycut.

milanove · 3 years ago
I wonder if many of these lowball salary job postings could just be companies applying for an employee's greencard through the PERM system. IIUC, for PERM, the applicant's employer is required to put out a job posting for at least 10 days to show nobody else can do the job their employee/greencard applicant does. The salary they list in the posting must be at least the minimum wage dictated by the Department of Labor for the role.

However, I suspect that the company making the ad will just list the lowest possible salary in the posting to deter real applicants from applying, hence making the greencard applications smoother.

However, don't quote me on this, since this is just my very vague knowledge on how greencard applications work. Somebody else here who knows more about this topic, please chime in to let me know if this is true.

devmunchies · 3 years ago
In my experience, a firmware engineer contractor is much more expensive than a web contractor. But that's probably just a contractor supply/demand thing vs full-time.
iamflimflam1 · 3 years ago
The same could be said for MicroPython or CircuitPython.

I suspect the target audience is more on the hobbyist/non embedded programmer side of things.

mmoskal · 3 years ago
There are some differences, but broadly correct (eg., the program is precompiled and the experience is definitely the best in VSCode; plus of course different language).

However, I heard of uPython being used in production deployments, though maybe not in millions of units.

(I'm working on DeviceScript)

mananaysiempre · 3 years ago
The thing is, if you want to poke at a device interactively, your options are either Forth or (if the system is beefy enough) one of these scripting language ports; tethered C interpreters are not precisely commonplace. And while I love Forth to bits, most people will probably go the scripting-language route.
nanidin · 3 years ago
I started my career in C/C++ and embedded. Every time I go back to work on embedded stuff for fun, it's like going back in time and losing all of the nice things that come with languages like TypeScript or JS. Suddenly things like associative arrays, maps, sets, lists, vectors - all require much more mental overhead to deal with and ultimately get in the way of what I actually want to be doing.
naikrovek · 3 years ago
TypeScript development on VScode is excellent (I'm told) so that would be a reason for this.

Excellent tooling exists for you if your language is TypeScript, so maybe try putting TypeScript in more places.

tentacleuno · 3 years ago
It most certainly is. I've tried a lot of development solutions over the years, and always find myself coming back to VSCode. The TypeScript support is amazing.
iampims · 3 years ago
it's not C, so that's a great step towards making programming IoT more accessible.
TheLoafOfBread · 3 years ago
Well, that's valid as long as you don't get an exception in the underlying C driver. Now you are solving two possible problems: an error in your script or an error in the driver. Good luck debugging that without a proper debugging probe.

danielEM · 3 years ago
My personal opinion is that a good TypeScript-to-C transpiler would do a way better job: tons of microcontrollers supported out of the box. I would also be happy to use it for desktop. With some tricks it could even support references and additional numeric data types (u8, i8 ...) without breaking any syntax.
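One way such fixed-width types could fit into today's TypeScript syntax, as a hypothetical sketch (the names `u8` and `duty` are made up; "branding" is a common TypeScript idiom, not a DeviceScript feature): a branded number type that a transpiler could lower to `uint8_t`.

```typescript
// Hypothetical branded type: still a plain number at runtime,
// but distinct at compile time, so a transpiler could emit uint8_t.
type u8 = number & { readonly __brand: "u8" };

// The constructor enforces wrap-around at runtime; the brand is erased.
const u8 = (n: number): u8 => (n & 0xff) as u8;

const duty = u8(200);  // a transpiler could lower this to: uint8_t duty = 200;
console.log(u8(300));  // 44: wraps like a real uint8_t would
```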
simlevesque · 3 years ago
Pure Typescript supporting all of the Ecmascript spec would have to output a lot of C to get the same results.
hutzlibu · 3 years ago
It obviously would have to be a limited subset, like devicescript is also "just" a subset of typescript.

I think the limitations for devicescript would probably also work for outputting reasonable amounts of C.

devmunchies · 3 years ago
I think a language that just transpiled to the equivalent C would be pretty awesome. I know Google is building Carbon, but it is more focused on C++. They pitch it as Typescript to JavaScript, Scala to Java, Carbon to C++.

Instead of Rust or Zig trying to replace C++ or Java, it seems better to just integrate with it without linking through some FFI.

I'm working on some C code for some microcontroller since it was too difficult to use Rust.

elcritch · 3 years ago
I use Nim on embedded precisely for that reason: https://github.com/elcritch/nesper

I wrapped much of Zephyr as well, but that one's less used: https://github.com/embeddednim/nephyr

impulser_ · 3 years ago
I could be wrong, but don't Vlang and Nim do this?
jeppester · 3 years ago
I'm not an expert, but wouldn't garbage collection be a difficult problem as well?
hutzlibu · 3 years ago
It would have to include its own garbage collector, yes.
onimishra · 3 years ago
Does such a thing exist? I would love something like that, but is it even feasible? Isn’t there a lot more you need to be aware of, to make a translation of say TS’s objects into C?
danielEM · 3 years ago
These are far from perfect, but still something:

https://github.com/andrei-markeev/ts2c/

https://github.com/evanw/thinscript

If you aim for 32-bit microcontrollers, then you can go with AssemblyScript to WASM and then use a WASM-to-C transpiler.

afiori · 3 years ago
They would likely have to severely restrict the range of supported types, for example `window` is likely impossible to compile meaningfully.
andrewstuart · 3 years ago
I really want to be able to compile typescript to not javascript. Ideally to native but if it’s bytecode or whatever that’s fine I don’t care.

It’s a nightmare trying to deal with bundling and compiling and targets and different import schemes ugh.

I wish I could just compile my programs so that it compiles all the stuff in node_modules and gives me some working code.

Desperate to get away from Node.js, I tried Deno and Bun… neither of them is anything close to a drop-in replacement for Node.js.

black_puppydog · 3 years ago
So you want structural typing, but for native?

Funny, the fact that typescript is "only" structurally typed is one of my main pain points with the language. (Of course it's tons better than vanilla JS)

LudwigNagasena · 3 years ago
What exactly do you miss? JS has built-in reflection, eg `typeof instance`, `instance.constructor.name`. Even multiple dispatch can be hacked together if you really need it, eg there is a library @arrows/multimethod.
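A small illustration of the reflection facilities mentioned above, plus what "only structurally typed" means in practice (`Motor` and `HasSpeed` are made-up names):

```typescript
// Runtime type information available in plain JS/TS:
class Motor {
  speed = 0;
}

const m = new Motor();
console.log(typeof m);            // "object"
console.log(m.constructor.name);  // "Motor"
console.log(m instanceof Motor);  // true

// Structural typing: any value with the right shape is accepted,
// with no declared relationship to Motor required.
interface HasSpeed { speed: number }
const duck: HasSpeed = { speed: 3 };
console.log(duck.speed);          // 3
```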
k__ · 3 years ago
What about AssemblyScript?
Benjamin_Dobell · 3 years ago
> == and != operators (other than == null/undefined); please use === and !===

That's a whole lot of equals signs.

Typos aside, this all looks really amazing!

Although by no means sound, the ergonomics of TypeScript's type system are phenomenal. Really glad to see additional use cases beyond traditional JS runtimes.

thangngoc89 · 3 years ago
Typescript has to inherit this typing madness because it’s strictly a Javascript superset. As a decade long JS developer, I avoid == and != like plague because of type castings and funny results that I don’t bother to remember.
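A few of those "funny results" — loose equality is not even transitive (`any` annotations are only there to keep the compiler from flagging the comparisons):

```typescript
// A classic non-transitive trio under loose equality:
const zero: any = 0;
const empty: any = "";
const zeroStr: any = "0";
console.log(zero == empty);    // true  ("" coerces to 0)
console.log(zero == zeroStr);  // true  ("0" coerces to 0)
console.log(empty == zeroStr); // false (string vs string, no coercion)

// null/undefined: the one case where == is genuinely handy
const a: any = null;
const b: any = undefined;
console.log(a == b);           // true
console.log(a === b);          // false
```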
littlecranky67 · 3 years ago
!=== is a typo and should probably be !==
biosboiii · 3 years ago
Main thing everyone overlooks: If you can run the same program in C on a MCU for 5 cents less, they are absolutely gonna go with that.

Cost cutting in high volume electronics is crazy.

jononor · 3 years ago
There are plenty of use cases for electronics which are not high volume. And many cases where the BOM margin is not the cost driver, such as when installation/setup costs dominate. And projects where cost is secondary to things like time-to-market, ability to integrate/customize, etc.
geijoenr · 3 years ago
It's really hard for me to understand how running a VM on a resource-constrained device has any benefit. There is a reason why those devices run very lightweight "OS"s like FreeRTOS and embedded C.

Why the constant obsession to apply a technology designed for a specific purpose everywhere else, even when it doesn't make sense?

cprecioso · 3 years ago
Say that to the millions of ID cards, transport cards, SIM cards, and other smart cards with a secure element that run Java (a lot of times only powered by the small current from the NFC tap).
iamflimflam1 · 3 years ago
Java was originally intended for embedded devices...
pjc50 · 3 years ago
It's not a bad decision for scripting provided the VM is lightweight enough. Things like "FORTH interpreter" or the old "BASIC stamp" microcontrollers. And it provides a degree of robustness vs running arbitrary binaries.
mrguyorama · 3 years ago
The Apollo program went to the moon with a complex VM on top of an extremely limited physical architecture. That's actually one of the main reasons to do it, because you can implement a strictly better (in all sorts of ways) virtual machine on top of really awful hardware.

Not to say that's valid in this instance, but plenty of early VMs were entirely made to improve resource constrained hardware

schwartzworld · 3 years ago
Making it easy for hobbyists who already know that technology to have access. Micropython has been successful, and this is an alternative to that.
geijoenr · 3 years ago
The github project indicates "DeviceScript brings a professional TypeScript developer experience to low-resource microcontroller-based devices."

If you tell me it's a toy and somebody's pet project: fine. It's all about having fun.

But then don't mention "professional" in the project description.

suprfnk · 3 years ago
Easy: because TypeScript or Python are way easier to learn than C. Learning C is a long, arduous, uphill battle against arcane error messages and undefined behaviour.

Unless you have a background in C/C++ already, most people can probably get up and running with something like this way, way faster.

littlecranky67 · 3 years ago
Good luck understanding things like `if(!!!c) { ... }` or why a line break after a return statement matters in JavaScript/TypeScript ;) JS has its own footguns and legacy baggage.
TheLoafOfBread · 3 years ago
How do you get that TypeScript or Python environment onto the chip of your interest in the first place? How do you expose hardware interfaces without knowledge of C?
jononor · 3 years ago
One of the main reasons was that they had to: the cost of a more capable system was too high. In the last years that has improved drastically, and there are many use cases where the 5 USD increase in BOM needed to run JS/Python etc. can be justified.
mmoskal · 3 years ago
Exactly! But it's more like 1.50 USD (ESP32-C3 or RP2040 compared to say STM8).
pjmlp · 3 years ago
8 and 16 bit home computing says hello.
TheLoafOfBread · 3 years ago
I agree, this will mostly go nowhere. Sure, when somebody prepares a DeviceScript environment for *your* board, then you are good to go. But in 99% of cases, you will get hardware in front of you which almost certainly is not supported by DeviceScript. And now, without intimate knowledge of C, how are you going to expose the interfaces of that particular hardware so you can work with them in DeviceScript? Well, you won't; you need to know C first.

Same problem for MicroPython. Same problem for Lua, same problem for any scripting language running on a constrained MCU.

jononor · 3 years ago
The target audience for such runtimes is teams with general software engineering skills, less embedded skills, and little hardware skills. They are likely to weigh software support (including drivers) very heavily when selecting hardware. This reduces how often the scenario you describe will come up, compared to traditional hardware development.
classified · 3 years ago
A VM can make all the gaping security holes portable between IoT devices.
manmal · 3 years ago
I appreciate the tongue-in-cheek, but I think there's really a chance for better IoT security when using a VM. Those things are connected to the internet (duh) and sandboxing is probably a good idea. You obviously don't need a VM for that, but maybe the tradeoffs are favorable.