Readit News
bglusman commented on Why I stopped using JSON for my APIs   aloisdeniel.com/blog/bett... · Posted by u/barremian
wilg · 2 months ago
One of the best parts of Protobuf is that there's a fully compatible JSON serialization and deserialization spec, so you can offer a parallel JSON API with minimal extra work.
bglusman · 2 months ago
Yes! Came to the comments to see if that had been discussed above; here's the link: https://protobuf.dev/programming-guides/json/
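For illustration, here is a minimal sketch of that "parallel JSON API" idea, assuming the Elixir `protobuf` package and its `Protobuf.JSON` helpers (the `Demo.Greeting` message module is made up for the example; real projects generate these modules from .proto files):

  # One message definition, two wire formats.
  defmodule Demo.Greeting do
    use Protobuf, syntax: :proto3

    field :name, 1, type: :string
    field :count, 2, type: :int32
  end

  msg = %Demo.Greeting{name: "hn", count: 3}

  binary = Demo.Greeting.encode(msg)       # compact binary framing
  {:ok, json} = Protobuf.JSON.encode(msg)  # canonical JSON mapping
  # json is roughly ~s({"name":"hn","count":3})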
bglusman commented on Why I stopped using JSON for my APIs   aloisdeniel.com/blog/bett... · Posted by u/barremian
catchmeifyoucan · 2 months ago
I wonder if we can write an API w/ JSON the usual way and change the final packaging to send it over protobuf.
bglusman · 2 months ago
Sure... https://protobuf.dev/programming-guides/json/

I was pushing at one point for us to have some code in our protobuf parsers that would essentially allow reading messages in either JSON or binary format. To be fair, there's some overhead that way (you end up doing some kind of try/catch), but for some use cases I think it's worth it...
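As a rough sketch of that idea (again assuming the Elixir `protobuf` package; `Protobuf.DecodeError` and `Protobuf.JSON.decode/2` are assumptions about its API), a decoder that tries the binary wire format first and falls back to the canonical JSON mapping:

  defmodule FlexibleDecode do
    # The rescue/fallback below is the overhead mentioned above; in a real API a
    # Content-Type check avoids the guesswork (and the risk of junk bytes that
    # happen to decode as a valid binary message).
    def decode(payload, message_mod) do
      {:ok, message_mod.decode(payload)}
    rescue
      Protobuf.DecodeError -> Protobuf.JSON.decode(payload, message_mod)
    end
  end

  # e.g. FlexibleDecode.decode(body, Demo.Greeting) accepts either encoding.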

bglusman commented on AI World Clocks   clocks.brianmoore.com/... · Posted by u/waxpancake
whoisjuan · 3 months ago
It's actually quite fascinating if you watch it for 5 minutes. Some models are overall bad, but others nail it in one minute and butcher it in the next.

It's perhaps the best example I have seen of model drift driven by just small, seemingly unimportant changes to the prompt.

bglusman · 3 months ago
We can't know how much is about the prompt, though, and how much is just stochastic variation in the behavior of that model on that prompt, right? I mean, even given identical prompts, even at temp 0, models don't always behave identically... at least, as far as I know? Some of the reasons why are, I think, still a research question, but I think it's a fact nonetheless.
bglusman commented on Some Smalltalk about Ruby Loops   tech.stonecharioteer.com/... · Posted by u/birdculture
shevy-java · 4 months ago
Agreed. While I think Matz is a great language designer, I loved Alan Kay's philosophy. I'd like some language that is OOP-centric in nature, fast, has an elegant syntax, and learns from Erlang's model (Elixir isn't it, unfortunately, but it got some ideas right).
bglusman · 4 months ago
If you want OOP then, yes, Elixir isn't it... maybe Pony? Curious what else you don't like about Elixir though, besides not being OOP... it's definitely got messaging!
bglusman commented on Show HN: WeUseElixir - Elixir project directory   weuseelixir.com/... · Posted by u/taddgiles
taddgiles · 5 months ago
WeUseElixir is a curated directory of apps, libraries and companies that use the Elixir programming language.

A few years ago I was introduced to Elixir. It was the first functional programming language I'd ever used. I became a huge fan of the language and the community.

I've now used Elixir in a variety of different projects both professional and personal. It's become my go-to language for building web applications. It is just fun to work with.

I created WeUseElixir as a way to increase awareness of the Elixir language and how it's being used. WeUseElixir provides a place for creators to share their projects and allows others to discover new and interesting projects.

bglusman · 5 months ago
Nice! There's also this[0] project, run by the elixir-school[1] community maintainers.

[0] https://elixir-companies.com/en
[1] https://elixirschool.com/en
bglusman commented on Booting 5000 Erlangs on Ampere One 192-core   underjord.io/booting-5000... · Posted by u/ingve
ThinkBeat · 6 months ago
I would be much more interested in seeing 5000 under heavy load.

Just being able to start that many instances is not that exciting until we know what they can do.

bglusman · 6 months ago
Erlang handles heavy load VERY well, between work-stealing schedulers and soft realtime via reduction counting (any process can be interrupted after any reduction, roughly any function call, and resumed transparently).
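As a rough illustration of the reduction-counting point (my own sketch, not from the article or the thread): a process stuck in a tight CPU loop never blocks the rest of the system, because every function call costs reductions and the scheduler suspends the process once its budget runs out.

  defmodule PreemptDemo do
    # A loop that never sleeps or waits on a message; it still gets preempted.
    def spin(n), do: spin(rem(n + 1, 1_000_000))

    def run do
      # Occupy every scheduler with a busy process.
      for _ <- 1..System.schedulers_online(), do: spawn(fn -> spin(0) end)

      # A fresh process still gets scheduled and replies almost immediately.
      {micros, :pong} =
        :timer.tc(fn ->
          parent = self()
          spawn(fn -> send(parent, :pong) end)

          receive do
            :pong -> :pong
          end
        end)

      IO.puts("ping/pong round trip took #{micros} microseconds")
    end
  end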
bglusman commented on Booting 5000 Erlangs on Ampere One 192-core   underjord.io/booting-5000... · Posted by u/ingve
zozbot234 · 6 months ago
> Erlang, at least the programming model, lends itself well to this, where each process has a local heap.

That loosely describes plenty of multithreaded workloads, perhaps even most of them. A thread that doesn't keep its memory writes "local" to itself as much as possible will run into heavy contention with other threads and performance will suffer a lot. It's usual to try and write multithreaded workloads in a way that tries to minimize the chance of contention, even though this may not involve a literal "one local heap per core".

bglusman · 6 months ago
Yes, but in Erlang everything in every process is immutable, and nothing ever writes anywhere besides its own local heap. Binding a new variable leaves the previous memory unchanged and fully accessible to anything still referencing it.
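A tiny illustration of what that means in practice (my own sketch, in Elixir rather than Erlang): building a new term on top of an old one never modifies the old one, and sending a term to another process copies it into that process's own heap.

  list = [1, 2, 3]
  bigger = [0 | list]   # a new cons cell in front; `list` itself is untouched

  IO.inspect(list)      # [1, 2, 3]
  IO.inspect(bigger)    # [0, 1, 2, 3]

  # Message passing copies the term into the receiver's heap, so two processes
  # never share mutable memory in the first place (large binaries are the
  # notable shared, refcounted exception).
  pid =
    spawn(fn ->
      receive do
        msg -> IO.inspect(msg, label: "receiver's own copy")
      end
    end)

  send(pid, bigger)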
bglusman commented on Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model   github.com/KittenML/Kitte... · Posted by u/divamgupta
bglusman · 6 months ago
I just decided to try this quickly and hit some issues on my Mac, FYI; it might work better on Linux. I hit a compilation issue with `curated-tokenizers`, possibly from a typo in its setup.py or pyproject.toml (spotted by AI): `-Wno-sign-compare-Wno-strict-prototypes` should be `-Wno-sign-compare -Wno-strict-prototypes`, so it could perhaps be fixed with a PR to curated-tokenizers, or by forking it...

There might well be other issues behind that, and it's unclear whether you'd need other dependencies that kitten doesn't rely on directly, like torch or torchaudio. So... not 5-minutes easy, but it looks like the issues could be worked through...

For reference, this is basically all I was trying:

  Mix.install([:pythonx])

  Pythonx.uv_init("""
  [project]
  name = "project"
  version = "0.0.0"
  requires-python = ">=3.8"
  dependencies = [
    "kittentts @ https://github.com/KittenML/KittenTTS/releases/download/0.1/kittentts-0.1.0-py3-none-any.whl"
  ]
  """)
to get the above error.

bglusman commented on Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model   github.com/KittenML/Kitte... · Posted by u/divamgupta
thedangler · 6 months ago
Elixir folks. How would I use this with Elixir? I'm new to Elixir and could use this in about 15 days.
bglusman · 6 months ago
It looks like it's Python, so it might be possible to use via https://github.com/livebook-dev/pythonx ? But the parallel huggingface/bumblebee idea was also good; I hadn't seen or thought of that, and it definitely works for a lot of other models, so I'm curious if you get it working! Some chance I'll play with this myself in a few months, so feel free to report back here or DM me!
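For anyone who wants to try the Pythonx route, here's a rough sketch of what the glue could look like (modulo the curated-tokenizers build issue from the earlier comment). The KittenTTS names below (the KittenTTS class, generate, the model id and voice string) are copied from the project README as best I remember it, and the Pythonx.eval usage is my assumption about that library's API, so treat the whole thing as unverified:

  Mix.install([:pythonx])

  Pythonx.uv_init("""
  [project]
  name = "project"
  version = "0.0.0"
  requires-python = ">=3.8"
  dependencies = [
    "kittentts @ https://github.com/KittenML/KittenTTS/releases/download/0.1/kittentts-0.1.0-py3-none-any.whl",
    "soundfile"
  ]
  """)

  # Run the README-style Python snippet inside the embedded interpreter;
  # eval/2 returns {last_expression_value, globals}.
  {_result, _globals} =
    Pythonx.eval(
      """
      from kittentts import KittenTTS
      import soundfile as sf

      m = KittenTTS("KittenML/kitten-tts-nano-0.1")
      audio = m.generate("Hello from Elixir", voice="expr-voice-2-f")
      sf.write("output.wav", audio, 24000)
      """,
      %{}
    )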

u/bglusman

Karma: 669 · Cake day: September 1, 2009
About
PepsiCo E-Commerce / Recurse Center (recurse.com)

brian.endofmyusername@pepsico.com (for work related stuff) or brian@endofmyusername.me for personal/hacker stuff.
