Readit News
demosthanos · 2 months ago
Skimming the source code I got really confused to see TSX files. I'd never seen Ink (React for CLIs) before, and I like it!

Previously discussions of Ink:

July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961

May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837

Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639

ccbikai · 2 months ago
Many CLI applications are now using Ink to write their UIs.

I suspect React will eventually become the standard way to write UIs everywhere.

amelius · 2 months ago
I'd rather apt-get install something.

But that doesn't seem to be an option with modern software distribution, especially for GPU-dependent stuff like LLMs.

So yeah, I get why this exists.

halJordan · 2 months ago
What is the complaint here? There are plenty of binaries you can invoke from your CLI that will query a remote LLM API.
gsibble · 2 months ago
We made this a while ago on the web:

https://terminal.odai.chat

gbacon · 2 months ago
Wow, that produced a flashback to using TinyFugue in the 90s.

https://tinyfugue.sourceforge.net/

https://en.wikipedia.org/wiki/List_of_MUD_clients

dncornholio · 2 months ago
Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling rendering issues.
demosthanos · 2 months ago
I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you're referring to by "handling issues with rendering", though; it looks like a pretty bog-standard React app. Am I missing something?
xigoi · 2 months ago
It’s not clear from the README what providers it uses and why it needs your GitHub username.
ccbikai · 2 months ago
Connects to any OpenAI-compatible API.

Using a GitHub username prevents abuse.
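"OpenAI-compatible" in practice means any server that accepts the standard chat-completions request shape. A minimal sketch of what that looks like; the `buildChatRequest` helper, base URL, and model name are illustrative assumptions, not taken from ssh-ai-chat's source:

```typescript
// Hypothetical helper showing the request shape an OpenAI-compatible
// server expects. Only the base URL and model change between providers.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

type ChatRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

function buildChatRequest(
  baseUrl: string,
  model: string,
  messages: ChatMessage[],
): ChatRequest {
  return {
    // OpenAI-compatible servers expose this path under their base URL.
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ""}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Swapping providers is just a matter of changing baseUrl and model:
const req = buildChatRequest("https://api.openai.com/v1", "gpt-4o-mini", [
  { role: "user", content: "hello" },
]);
console.log(req.url); // https://api.openai.com/v1/chat/completions
```

The same helper would target OpenRouter, a self-hosted gateway, or anything else that speaks the chat-completions protocol.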

gclawes · 2 months ago
Is this doing local inference? If so, what inference engine is it using?
demosthanos · 2 months ago
No, it's a thin wrapper around an API, probably OpenRouter or similar:

https://github.com/ccbikai/ssh-ai-chat/blob/master/src/ai/in...

ccbikai · 2 months ago
It currently uses the OpenAI API to access multiple models. You can use Ollama for local inference.
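Ollama works here because it serves an OpenAI-compatible API at `/v1` on its default port (11434). A sketch of switching the same client config between a hosted provider and a local Ollama server; the model names and env var are illustrative assumptions:

```typescript
// Hypothetical config switch: the same OpenAI-style client settings pointed
// at either a hosted provider or a local Ollama instance.

type ProviderConfig = { baseUrl: string; apiKey: string; model: string };

function providerFor(target: "openai" | "ollama"): ProviderConfig {
  if (target === "ollama") {
    // Ollama exposes an OpenAI-compatible API at /v1 on its default port.
    // It ignores the API key, but most clients require a non-empty value.
    return {
      baseUrl: "http://localhost:11434/v1",
      apiKey: "ollama",
      model: "llama3",
    };
  }
  return {
    baseUrl: "https://api.openai.com/v1",
    apiKey: process.env.OPENAI_API_KEY ?? "",
    model: "gpt-4o-mini",
  };
}

const local = providerFor("ollama");
console.log(local.baseUrl); // http://localhost:11434/v1
```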
ryancnelson · 2 months ago
This is neat... whose Anthropic credits am I using, though? Sonnet 4 isn't cheap! Would I hit a rate limit if I used this for daily work?