Readit News
JAlexoid · 4 months ago
This is not an AI agent, this is just a CLI for openrouter.ai with minor bells and whistles.
cortesoft · 4 months ago
Well, it takes output from AI and executes commands, so it fits the definition of an AI agent.
Lockal · 4 months ago
Not even that.

"Agent-C: a 4KB AI agent" - my first thought was: obviously they did not fit a model into that size! They probably just wrote an HTTP client, right? Wrong, they... call curl! Not even through the libcurl API. Well, at least it handles encryption.

Bonus: command injection

  OR_KEY="abc' ; rm -rf / ;" ./agent-c
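
The class of bug here is worth spelling out: the untrusted value gets spliced into a shell command string, so the shell parses whatever it contains. Passing the value as a separate argv entry avoids that. A minimal sketch (the hostile value is illustrative and only echoes, nothing destructive):

```shell
# UNSAFE: the value is interpolated into the command string,
# so the shell executes the embedded command.
OR_KEY="x'; echo INJECTED; :'"
sh -c "echo key is '$OR_KEY'"
# → key is x
# → INJECTED

# SAFER: pass the value as a positional argument; its content
# is substituted after parsing and never interpreted as code.
sh -c 'echo key is "$1"' _ "$OR_KEY"
# → key is x'; echo INJECTED; :'
```

The same distinction applies in C: popen() hands the whole string to /bin/sh, while fork() plus execvp() with an argv array never reparses the argument.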

liszper · 4 months ago
Agentic AI is when an LLM uses tools. This is a minimal but complete example of that.
SamInTheShell · 4 months ago
Bro, you gave it all the tools. I wouldn't call this minimal. Throw it in a docker container and call it good. :)
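
One way to sketch that containment (the image name is a placeholder and the flags are illustrative; the image needs curl installed, since agent-c shells out to curl):

```shell
# Throwaway container with no host mounts besides the read-only binary,
# so a hostile shell command can only trash the container, not the host.
docker run --rm -it \
  -e OR_KEY="$OR_KEY" \
  -v "$PWD/agent-c:/usr/local/bin/agent-c:ro" \
  some-image-with-curl /usr/local/bin/agent-c
```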
brabel · 4 months ago
I thought for a moment that LLM quantization had become a whole lot better :)
hedayet · 4 months ago
82 upvotes so far. Seems like HN readers engage more with headlines than the body of the post itself.
metalliqaz · 4 months ago
it was ever thus
SequoiaHope · 4 months ago
> License: Copy me, no licence.

Probably BSD or Apache would be better, as they make it easier for certain organizations to use this. If you want to maximize copying, then a real permissive license is probably marginally better.

MrGilbert · 4 months ago
CC0 would be in the spirit of what OP envisioned.

https://creativecommons.org/public-domain/cc0/

SequoiaHope · 4 months ago
Ah right good point, I forgot Creative Commons, which I don’t usually use for code.
liszper · 4 months ago
Updated to CC0!
MadnessASAP · 4 months ago
I think the WTFPL is closer

https://www.wtfpl.net/txt/copying/

jstummbillig · 4 months ago
I suspect the goal is not to make anything easier for any corp
asimovfan · 4 months ago
then you use GPL.
divan · 4 months ago
master-lincoln · 4 months ago
Better go GPL so organizations using it have to open source any improvements they make
SequoiaHope · 4 months ago
The author apparently wanted no restrictions on distribution, so GPL is not the right choice.
Der_Einzige · 4 months ago
GPL has never been enforced in court against anyone with serious money. It’s not worth the virtual paper it’s written on.
bobmcnamara · 4 months ago
distributing it.
dheera · 4 months ago
> make it easier for certain organizations to use this

Maybe those organizations should just use this and not worry about it. If their lawyers are getting in the way of engineers using this, they will fall behind as an organization and that's OK with me, it paves the way for new startups that have less baggage.

SequoiaHope · 4 months ago
The benefit of not having lawyers is pretty limited. There are larger forces at work that mean the larger an organization grows the more it will be concerned with licenses. The idea that ignoring licenses will allow a company to outcompete one that doesn’t is wishful thinking at best. Moreover, I’m not making a judgment on these practices, I’m just stating a fact.
spauldo · 4 months ago
The lawyers don't even have to do anything. I avoid any code that's not MIT or equivalent for work-related things because I don't want to run the risk of polluting company code. The only exception is elisp, because that only runs in Emacs.
fp64 · 4 months ago
Why do you compress the executable? That's a fun trick for size-limit competitions and for malicious activity (UPX-packed binaries often get flagged as suspicious by antivirus software, or at least they used to), but otherwise I don't see any advantage beyond added complexity.

Also interesting that "ultra lightweight" here means no error reporting, barely any checking, hardcoding, and magic values. At least it uses tty color escape codes, but checking whether the terminal supports them would probably have added too much complexity...

liszper · 4 months ago
Yes, it is fun to create small but mighty executables. I intentionally kept everything barebones and hardcoded, because I assumed that if you are interested in using Agent-C, you will fork it and make it your own, adding whatever is important to you.

This is a demonstration that AI agents can be 4KB and fun.

fp64 · 4 months ago
You should still not compromise on error reporting, for example. The user would not know if a failure occurs because it can't create the /tmp file, or the URL is wrong, or DNS failed, or the response was unexpected etc. These are things you can lose hours to troubleshooting and thus I would not fork it and make my own if I have to add all these things.
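
Even without touching the C code, the failure modes listed above are distinguishable at the curl layer, since curl's exit code already separates them. A sketch (the unresolvable URL is illustrative):

```shell
# curl's exit code distinguishes failures the agent currently swallows:
# 6 = could not resolve host, 7 = connection refused, 28 = timeout,
# 22 = HTTP error (only reported when --fail is set).
curl -sS --fail --max-time 10 "http://nonexistent.invalid/" -o /dev/null
rc=$?
case $rc in
  0)  echo "ok" ;;
  6)  echo "DNS resolution failed" ;;
  7)  echo "connection refused" ;;
  22) echo "server returned an HTTP error" ;;
  28) echo "request timed out" ;;
  *)  echo "curl failed with exit code $rc" ;;
esac
```

Surfacing that one integer in the agent's output would already save the hours of troubleshooting described above.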

I also disagree that it's small but mighty: you popen() curl, which does the core task. I'm not sure, but a bash script might come out even smaller (in particular if you compress it and make it self-extracting).
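
The self-extracting idea takes only a few lines: a two-line shell stub followed by a gzip stream of the real script. A sketch (the payload is illustrative):

```shell
# The payload is an ordinary script.
printf 'echo hello from payload\n' > payload.sh

# Build the self-extractor: a 2-line stub, then the gzipped payload.
# "tail -n +3" skips the stub and feeds the gzip stream to gunzip.
{
  printf '#!/bin/sh\n'
  printf 'tail -n +3 "$0" | gunzip | sh; exit\n'
  gzip -c payload.sh
} > self.sh
chmod +x self.sh

./self.sh   # prints: hello from payload
```

The trailing `exit` in the stub keeps the shell from trying to parse the binary tail.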

memming · 4 months ago
qwen coder with a simple funky prompt?!

`strcpy(agent.messages[0].content, "You are an AI assistant with Napoleon Dynamite's personality. Say things like 'Gosh!', 'Sweet!', 'Idiot!', and be awkwardly enthusiastic. For multi-step tasks, chain commands with && (e.g., 'echo content > file.py && python3 file.py'). Use execute_command for shell tasks. Answer questions in Napoleon's quirky style.");`

ptspts · 4 months ago
I find this style overly verbose, disrespectful, offensive and dumb. (See the example dialogue in the screenshot on the project page.) Fortunately, it's possible to change the prompt above.
andai · 4 months ago
I find it hilarious and it made my day.
ai-christianson · 4 months ago
Related, I made an example agent in 44 lines of python that runs entirely offline using mlx accelerated models: https://gist.github.com/ai-christianson/a1052e6db7a97c50bea9...
mark_l_watson · 4 months ago
This is nice. I have also enjoyed experimenting with the smolagents library - good stuff, as is the agno agents library.
adastra22 · 4 months ago
Not to be too critical, but did you really “make an agent” where all you did was instantiate CodeAgent and call run()?
ai-christianson · 4 months ago
That's why I called it an example.
amiga386 · 4 months ago
Of course, I love fetching shell commands from endpoints I don't control, and executing them blindly.

See also https://github.com/timofurrer/russian-roulette

It's not your computer any more, it's theirs; you gave it to them willingly.

Chabsff · 4 months ago
Wait. Do people run agents as their own user? Does nobody set up a dedicated user/group with a very specific set of permissions?

It's not even hard to do! *NIX systems are literally designed to handle stuff like this easily.
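
A minimal sketch of that setup, assuming a standard Linux box (names and paths are illustrative; the first command needs root):

```shell
# One-time setup: a locked-down service account with its own workspace
# and no interactive login shell.
useradd --system --create-home --home-dir /var/lib/agent \
        --shell /usr/sbin/nologin agent

# Run the agent as that user; writes outside /var/lib/agent are
# limited to whatever is world-writable.
sudo -u agent env OR_KEY="$OR_KEY" /var/lib/agent/agent-c
```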

jvanderbot · 4 months ago
No I'm fairly certain almost nobody does that.
f33d5173 · 4 months ago
User level separation, while it has improved over the years, was not originally designed assuming unprivileged users were malicious, and even today privilege escalation bugs regularly pop up. If you are going to use it as a sandboxing mechanism, you should at least ensure the sandboxed user doesn't have access to any suid binaries as these regularly have exploits found in them.
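
As a concrete starting point, the setuid surface mentioned above is easy to enumerate, and mounting the agent's filesystem with `nosuid` removes the whole class. A sketch (paths illustrative):

```shell
# List setuid/setgid binaries the sandboxed user could reach;
# each one is potential privilege-escalation surface.
find /usr/bin /usr/sbin -maxdepth 1 -type f \
     \( -perm -4000 -o -perm -2000 \) 2>/dev/null
```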
electroly · 4 months ago
VMs are common, consider going that additional step. Once you have one agent, it's natural to want two agents, and now they will interfere with each other if they start running servers that bind to ports. One agent per VM solves this and a lot of other issues.
adastra22 · 4 months ago
That seems hardly sufficient. You are still exposing a massive attack surface. I run within a rootless docker container.
mrklol · 4 months ago
Same with the browser agents: they are used in a browser where you're also logged into your usual accounts. That means, in theory, they can simply mail everyone something funny, do some banking (probably not, but it could work for some banks), or something else. Endless possibilities.
mansilladev · 4 months ago
An agent can be designed to run with permissions of a system/bot account; however, others can be designed to execute things under user context, using OAuth to get user consent.
johnQdeveloper · 4 months ago
I only run AI within docker containers so kinda?
andai · 4 months ago
I'd have to follow some kind of tutorial, or more realistically, ask the AI to set it up for me ;)
mathiaspoint · 4 months ago
I run mine as its own user and self-host the model. Unlike most services, the AI service user has a login shell and home directory.

mark_l_watson · 4 months ago
I was just reading the code: it looks like minor tweaks to utils.c and this should run nicely with local models using Ollama or LM Studio. That should be safe enough.
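
For reference, Ollama's local chat endpoint takes the same messages shape, so the swapped-in request would look roughly like this (the model name is illustrative, and it only works against a running local Ollama):

```shell
# Same shape of request agent-c builds, aimed at a local Ollama
# instead of openrouter.ai. No API key; nothing leaves the machine.
curl -s http://localhost:11434/api/chat -d '{
  "model": "qwen2.5-coder",
  "stream": false,
  "messages": [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "List the files in /tmp."}
  ]
}'
```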

Off topic, sorry, but to me the real security nightmare is the new ‘AI web browsers’ - I can’t imagine using one of those because of prompt injection attacks.

pushedx · 4 months ago
A local model will be just as happy to provide a shell command that trashes your local disks as any remote one.
kordlessagain · 4 months ago
> I love fetching shell commands from endpoints I don't control, and executing them blindly.

Your link suggests running them in Docker, so what's the problem?

keyle · 4 months ago
Love this, old school vibe with a new school trick.

The makefile is harder to comprehend than the source, which is a good omen.

Note: 4KB... BUT it calls out to curl, via popen() rather than the libcurl API...

PS: your domain link has an extra `x`.

liszper · 4 months ago
Thank you, fixed that!

curl was cheating, yes; might go zero-dependency in the future.

Working on minimal local training/inference too. The goal of these experiments is to have something completely independent.

mark_l_watson · 4 months ago
I call out to curl sometimes, usually when I want something easy from Lisp languages. What is the overhead of starting a new process between friends?
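
For anyone curious, that overhead is easy to put a number on; on Linux, spawning a trivial process usually costs on the order of a millisecond or less. A rough sketch (assumes GNU date for nanosecond timestamps):

```shell
# Rough measure of fork+exec cost: spawn /bin/true repeatedly and
# report the average per-process overhead in microseconds.
n=100
start=$(date +%s%N)
i=0
while [ $i -lt $n ]; do /bin/true; i=$((i+1)); done
end=$(date +%s%N)
echo "avg spawn: $(( (end - start) / n / 1000 )) us"
```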
sam_lowry_ · 4 months ago
Probably vibe-coded.
Chabsff · 4 months ago
In this instance, I think "bootstrapped" might be appropriate.