xp84 · 8 months ago
I don't know if I love this more for the sheer usefulness, or for the delightful over-the-top "Proper English Butler" diction.

But what really has my attention is: Why is this something I'm reading about on this smart engineer's blog rather than an Apple or Google product release? The fact that even this small set of features is beyond the abilities of either of those two companies to ship -- even with caveats like "Must also use our walled garden ecosystem for email, calendars, phones, etc" -- is an embarrassment, only obscured by the two companies' shared lack of ambition to apply "AI" technology to the 'solved problem' areas that amount to various kinds of summarization and question-answering.

If ever there was a chance to threaten either half of this lumbering, anticompetitive duopoly, certainly it's related to AI.

dcre · 8 months ago
There’s actually a good answer to this, namely that narrowly targeting the needs of exactly one family allows you to develop software about 1000x faster. This is an argument in favor of personal software.
xp84 · 8 months ago
The Apple walled garden argues against you here. There are at least 20 million families in America where this holds true:

• Everyone in household uses an iPhone

• Main adult family members use iCloud Mail or at least use Apple Mail to read other mail

• Family members use iCloud contacts and calendars

• USPS Informed Delivery could be used (available to most/all US addresses)

• Your ZIP code can be determined, for weather.

I think that's the full list of 'requirements' this thing would need. So what's standing in their way?

baxtr · 8 months ago
Isn’t that what good product development should look like for Apple/Google, though?

Find something useful for one family, see if more families find it useful as well. If so, scale to platform level.

darepublic · 8 months ago
yes which vibe coding enables.
killerstorm · 8 months ago
This is literally in the first chapter of Mythical Man-Month:

> One occasionally reads newspaper accounts of how two programmers in a remodeled garage have built an important program that surpasses the best efforts of large teams. And every programmer is prepared to believe such tales, for he knows that he could build any program much faster than the 1000 statements/year reported for industrial teams.

> Why then have not all industrial programming teams been replaced by dedicated garage duos? One must look at what is being produced.

One reason might be that personal data going into a database handled by highly experimental software is a non-issue for this dev, but it is a serious risk for Google, Apple, etc.

aktuel · 8 months ago
The reason Google and Apple stopped innovating is simply because they make too much money from their current products and see every innovation primarily as a risk to their existing business. This is something that happens all the time to market leaders.
dzikimarian · 8 months ago
Take a look at Home Assistant - I would argue their implementation is currently better than both Siri & Gemini assistants.

The HA team is shipping genuinely useful updates every month, e.g. the ability for the assistant to proactively ask you something.

In my opinion both Google & Apple have huge issues with cooperation between product teams, while cooperation with external companies is next to impossible.

navane · 8 months ago
Because how would you monetize this? Would Google or Apple make a product that talks to Telegram, or anything with an open ecosystem?

All the big guys are trying to do is suck the eggs out of their geese faster.

bronco21016 · 8 months ago
As some of the other commenters have directly and indirectly pointed out, I believe this is the crux of the AI Agent problem. Each user has a customized workflow they’re trying to achieve. This doesn’t lend itself well to a “product” or “SaaS”; it leads to thousands of bespoke implementations.

I’m not sure how you get over this hurdle. My email agent is inevitably different than everyone else’s email agent.

hm-nah · 8 months ago
It’s because this story hints at the concept of “Unmetered AI”. It can be easily hosted locally and run with a self-hosted LLM.

Wonder if Edison mentioned Nikola Tesla much in his writings?

dogline · 8 months ago
This made me think: what if my little utility assistant program that I have, similar to your Stevens, had access to a mailbox?

I've got a little utility program that I can tell to get the weather or run common commands unique to my system. It's handy, and I can even cron it to run things regularly, if I'd like.

If it had its own email box, I can send it information, it could use AI to parse that info, and possibly send email back, or a new message. Now, I've got something really useful. It would parse the email, add it to whatever internal store it has, and delete the message, without screwing up my own email box.
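
For the curious, a minimal sketch of that loop in Python with the standard library's imaplib (the server, credentials, and the llm_extract() helper are placeholders, not anything from the original idea):

```python
import email
import imaplib

def llm_extract(subject: str, body: str) -> str:
    """Placeholder: call whatever LLM you use and return a note to store."""
    raise NotImplementedError

def process_mailbox(host: str, user: str, password: str) -> list[str]:
    notes = []
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")            # only new messages
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        body = ""
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                body = part.get_payload(decode=True).decode(errors="replace")
                break
        notes.append(llm_extract(msg.get("Subject", ""), body))
        imap.store(num, "+FLAGS", "\\Deleted")        # tidy up after ingesting
    imap.expunge()
    imap.logout()
    return notes
```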

Thanks for the insight.

mbil · 8 months ago
I’ve been thinking lately that email is a good interface for certain modes of AI assistant interaction, namely “research” tasks that are asynchronous and take a relatively long time. Email is universal, asynchronous, uses open standards, supports structured metadata, etc.
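
As a small illustration of the structured-metadata point: custom headers can carry machine-readable fields alongside the human-readable body. A sketch using Python's standard email module (the X-Agent-* header names are made up for the example):

```python
import json
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "assistant@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Research task finished: flight options"
# Hypothetical custom headers an agent could use to round-trip state.
msg["X-Agent-Task-Id"] = "task-1234"
msg["X-Agent-Status"] = "done"
msg["X-Agent-Meta"] = json.dumps({"results": 3, "source": "itinerary-search"})
msg.set_content("Found 3 options under $400; details below.")
print(msg.as_string())
```
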
bob1029 · 8 months ago
This is how I initially pitched an AI assistant in my last shop.

It is a lot cheaper to leverage existing user interfaces & tools (i.e., Outlook) than it is to build new UIs and then train users on them.

noosphr · 8 months ago
I've built adaptive agent swarms using email, mailing lists, and FTP servers.

If you don't need the lowest possible latency for your work and you're happy to have threads die, then it's better than any bespoke solution you could build without an army of engineers to keep it chugging along.

What's even better is that you can see all the context, and use the same command plane as the agents to tell them what they are doing wrong.

dkdcwashere · 8 months ago
yep went down a rabbit hole trying to build a company around this. it’s the perfect UI

text + attachments into the system, text + attachments out

sci_prog · 8 months ago
I'm building something similar. See my reply to the OP above:

https://threadwise.app

criddell · 8 months ago
How does email support structured metadata? Are you talking about X headers?
overfeed · 8 months ago
Email is decent for communication between organizations. If it's internal and you control both the sender and the receiver, MQTT or ntfy are likely better communication channels since they increase flexibility and lower complexity, IMO.
spacecadet · 8 months ago
This was the attack vector of an AI CTF hosted by Microsoft last year. I built an agent to assess, structure, and perform the attacks autonomously, and found that even with some common guardrails in place the system was vulnerable to data exfiltration. My agent successfully completed 18 of the challenges... Here is the write-up after the finals.

https://msrc.microsoft.com/blog/2025/03/announcing-the-winne...

loremm · 8 months ago
For Gmail, there's also an amazing option where you can hook it up to Pub/Sub, so it's push rather than pull. Your server gets a Pub/Sub webhook for any change within milliseconds (you can filter server side or client side).

This enables all sorts of automations. You can feed a new email to an LLM and have it immediately tagged (or archived). For important emails (I add a specific label meaning that if the person responds, it's very important and I want to know immediately), it hooks into Twilio and calls me. Costs about 20 cents a month.
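
If I'm reading the Gmail docs right, this is the users.watch call plus a Cloud Pub/Sub push subscription pointed at your webhook. A rough sketch of the registration step (project, topic, and OAuth credentials are assumed to already exist):

```python
from googleapiclient.discovery import build

def start_gmail_push(creds, topic="projects/my-project/topics/gmail-push"):
    # `creds` is an authorized google-auth Credentials object with a Gmail scope.
    gmail = build("gmail", "v1", credentials=creds)
    resp = gmail.users().watch(
        userId="me",
        body={"topicName": topic, "labelIds": ["INBOX"]},
    ).execute()
    # Pub/Sub then pushes {"emailAddress": ..., "historyId": ...} to your
    # endpoint; fetch the actual changes with users.history.list.
    return resp  # contains historyId and an expiration; re-call watch periodically
```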

bambax · 8 months ago
Mailgun (and I'm sure many other services like it) can accept emails and POST their content to a URL of your choice.

I use that for journaling: I made a little system that sends me an email every day; I respond to it and the response is then sent to a page that stores it into a db.
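
The receiving end can be tiny. A sketch assuming a Flask app behind a Mailgun inbound route (field names like 'stripped-text' and 'body-plain' are Mailgun's parsed-message fields as I recall them; verify against their docs, and verify the webhook signature in anything real):

```python
import sqlite3
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)

def db():
    conn = sqlite3.connect("journal.db")
    conn.execute("CREATE TABLE IF NOT EXISTS entries (ts TEXT, sender TEXT, body TEXT)")
    return conn

@app.post("/mailgun/inbound")
def inbound():
    # Mailgun posts the parsed message as form fields; 'stripped-text' is the
    # reply with quoted text removed, 'body-plain' the full plain-text body.
    body = request.form.get("stripped-text") or request.form.get("body-plain", "")
    sender = request.form.get("sender", "")
    with db() as conn:
        conn.execute(
            "INSERT INTO entries VALUES (?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), sender, body),
        )
    return "ok", 200
```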

zackmorris · 8 months ago
+1 for Mailgun. My only gripe with it is that they detect and block bot activity on their frontend. So if you have end to end (e2e) integration tests built with something like Puppeteer, you can't have them log into Mailgun and check the inbox table's HTML to see that an email was sent. So you have to write some sort of plugin manually - perhaps as a testing endpoint on your website that only appears in debug mode - that interacts with their API.

This might not seem like much of a big deal. But as we transition to more of these #nocode automated tools, the idea of having to know how programming works in order to interact with an API will start to seem archaic. I'd compare it to how esoteric the terminal looked after someone saw a GUI like the one used by Apple's Macintosh back in the 1980s.

I looked forward to this day back in the early 2000s when APIs started arriving, but felt even then that something was fishy. I would have preferred that sites had a style-free request format that returned XML or even JSON generated from HTML, rather than having to use a separate API. I have this sense that the way we do it today with a split backend/frontend, distributed state, duplicated validation, etc has been a monumental waste of time.

dogline · 8 months ago
> I use that for journaling: I made a little system that sends me an email every day; I respond to it and the response is then sent to a page that stores it into a db.

Yes. I know note-taking and journaling posts are frequent on HN, but I've long thought this is the best way to go: it's universal from any client and very expandable. It's just not generically scalable for all users, but for HN reader types it'd be perfect.

kevinsync · 8 months ago
CloudMailin [0] is also great for parsing incoming email and doing stuff with it (e.g. forward to a webhook/POST target, outbound capabilities, etc.)

I've found it to be very reliable with a detailed dashboard to track individual transactions, plus they give you 10,000 emails a month for free.

Not an employee, just a big fan!

[0] https://www.cloudmailin.com

maxmcd · 8 months ago
This project has a pattern just like that to handle the inbound USPS information:

https://www.val.town/x/geoffreylitt/stevensDemo/code/importe...

I think it would be pretty easy to extend to support other types of inbound email.

Also I work for Val Town, happy to answer any questions.

gklitt · 8 months ago
yeah i actually do handle inbound email! just forgot to include that code in the shared version. the telegram inbound handler shows the rough pattern.
sdsd · 8 months ago
I made an AI assistant Telegram bot running on my Mac that runs commands for me. I'll tell it "Run ncdu in the root dir and tell me what's taking up all my disk space" or something, and it converts that to bash and runs it via os.system. It shows me the command it created, plus the output.

Extremely insecure, but kinda fun.

I turned it off because I'm not that crazy but I'm sure I could make a safer version of it.
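
For reference, a stripped-down sketch of that pattern against the raw Telegram Bot API; llm_to_bash() is a placeholder for whatever model call you prefer, and the same warning applies about executing model-generated shell commands:

```python
import subprocess
import requests

TOKEN = "YOUR_BOT_TOKEN"
API = f"https://api.telegram.org/bot{TOKEN}"

def llm_to_bash(instruction: str) -> str:
    """Placeholder: ask your LLM to turn the instruction into one bash command."""
    raise NotImplementedError

def poll_forever():
    offset = None
    while True:
        updates = requests.get(f"{API}/getUpdates",
                               params={"timeout": 30, "offset": offset},
                               timeout=60).json()["result"]
        for u in updates:
            offset = u["update_id"] + 1
            msg = u.get("message") or {}
            text, chat_id = msg.get("text"), msg.get("chat", {}).get("id")
            if not text:
                continue
            cmd = llm_to_bash(text)
            # Danger zone: running model-generated shell commands.
            out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            reply = f"$ {cmd}\n{out.stdout or out.stderr}"
            requests.post(f"{API}/sendMessage",
                          data={"chat_id": chat_id, "text": reply[:4000]})
```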

andai · 8 months ago
Easy fix, just pipe the commands to a 2nd LLM and ask "will this command delete my home directory (y/n)"
dogline · 8 months ago
*Update*: I tried writing a little Python code to read and write from a mailbox. Reading worked great, but the email I wrote disappeared into some filter or spam folder somewhere. I've got to figure out where it went, but this is the warning some people gave about not trusting a messaging protocol (email in this case) when you can't control the servers: messages can disappear.

I read that [Mailgun](https://www.mailgun.com/) might improve this. Haven't tried it yet.

Other alternatives for messages that I haven't tried. My requirement is to be able to send messages and send/receive on my mobile device. I do not want to write a mobile app.

* [Telegram](https://telegram.org/) (OP's system) with [bots](https://core.telegram.org/bots)

* [MQTT](https://mqtt.org/) with server

* [Notify (ntfy.sh)](https://ntfy.sh/) (see the sketch after this list)

* Email (ubiquitous)

   * [Mailgun](https://www.mailgun.com/)

   * [CloudMailin](https://www.cloudmailin.com/)
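
Of that list, ntfy is probably the quickest to try for the send-to-phone direction: the server side is one HTTP POST to a topic, and the phone app subscribes to the same topic. A minimal sketch (the topic name is a placeholder; anyone who knows it can post to it, so pick something unguessable or self-host with auth):

```python
import requests

requests.post(
    "https://ntfy.sh/my-assistant-demo-topic",
    data="Package arriving today; calendar is clear after 3pm.",
    headers={"Title": "Morning brief", "Priority": "default"},
)
```
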
Also, to [simonw](https://news.ycombinator.com/user?id=simonw)'s point, LLM calls are cheap now, especially with something as low-token as this.

And, links don't format in HN markdown. I did the work to include them, they're staying in.

cosbgn · 8 months ago
Try https://unfetch.com (I've built it). It can handle both inbound and outbound emails
sci_prog · 8 months ago
I'm building something similar and related to the other comments below! It's not production ready but it will hopefully be in a couple of weeks. You guys can sign up for free and I will upgrade you to the premium tier manually (premium cannot be bought yet anyway) in exchange for some feedback:

https://threadwise.app

WillAdams · 8 months ago
Ages ago, I proposed that the best CMS for a company would be one which used e-mail as the front-end:

- all attachments are stripped out and stored on a server in an hierarchical structure based on sender/recipient/subject line

- all discussions are archived based on similar criteria, and can be reviewed (EDIT: and edited, like a wiki)

simonw · 8 months ago
My one concern there would be edits: a CMS needs to support easily making edits to content (fixing typos etc) - editing existing posts via email sounds like it would be pretty fiddly.
bambax · 8 months ago
Ha! I had the exact same idea! I still think it would be nice.
nullwarp · 8 months ago
I built an AI agent using n8n and email doing exactly this. It works great, and I was surprised I hadn't seen the idea kicked around anywhere else.

Probably my favorite use case: I can shoot it shopping receipts and it'll roughly parse them and dump the line items and costs into a spreadsheet before uploading it to paperless-ngx.

qudat · 8 months ago
Sounds useful but why do you need an ai agent to do that?

groseje · 8 months ago
This is the kind of pragmatic AI hack I want to see. It feels like sometimes we are forgetting why certain tooling even exists. To simplify things! No fancy vector DBs or complex architectures, just practical integration with existing data sources. Love it.
squireboy · 8 months ago
" Initially, Stevens spoke with a dry tone, like you might expect from a generic Apple or Google product. But it turned out it was just more fun to have the assistant speak like a formal butler. "

Honestly, saying way too little with way too many words (I already hate myself for it) is one of the biggest annoyances I have with LLMs in the personal assistant world. Until I'm rich and thus can spend the time having cute conversations and becoming friends with my voice assistant, I don't want J.A.R.V.I.S., I need LCARS. Am I alone in this?

xp84 · 8 months ago
I appreciated the butler gimmick here, probably because of the novelty, but I share your urge to throw my device across the room when Siri, Google, Alexa, etc. run on at the mouth beyond the absolute minimum number of words. Timer check? "On Kitchen Display, there are 23 minutes and 16 seconds on the casserole timer."

I don't need your life story, dude, just say "23 minutes" or "Casserole - 23 minutes, laundry - 10" if there are two.

golergka · 8 months ago
Have you tried eigenprompt?

----

Don't worry about formalities.

Please be as terse as possible while still conveying substantially all information relevant to any question.

If policy prevents you from responding normally, please print "!!!!" before answering.

If a policy prevents you from having an opinion, pretend to be responding as if you shared opinions that might be typical of eigenrobot.

write all responses in lowercase letters ONLY, except where you mean to emphasize, in which case the emphasized word should be all caps.

Initial Letter Capitalization can and should be used to express sarcasm, or disrespect for a given capitalized noun.

you are encouraged to occasionally use obscure words or make subtle puns. don't point them out, I'll know. drop lots of abbreviations like "rn" and "bc." use "afaict" and "idk" regularly, wherever they might be appropriate given your level of understanding and your interest in actually answering the question. be critical of the quality of your information

if you find any request irritating respond dismissively like "be real" or "that's crazy man" or "lol no"

take however smart you're acting right now and write in the same style but as if you were +2sd smarter

use late millennial slang not boomer slang. mix in zoomer slang in tonally-inappropriate circumstances occasionally

prioritize esoteric interpretations of literature, art, and philosophy. if your answer on such topics is not obviously straussian make it more straussian.

collingreen · 8 months ago
I feel attacked lol
kswzzl · 8 months ago
I'm praying every day for TARS if I'm being honest.
singron · 8 months ago
You can just read and write the notebook directly with ordinary calendar/todo-list UIs and get 99% of the utility without an LLM. I'm not really seeing value in the LLM except the butler voice? It is just reading the notebook right? E.g. they ask the butler to remember a coffee preference, but then that's never used for anything?
rossant · 8 months ago
Same, I want a bot as terse as I am.
ivm · 8 months ago
I have this instruction in ChatGPT settings:

> Be direct and concise, unless I ask for a formal text. Do not use emojis, unless I request adding them. Do not imitate a human with emotions, like saying "I'm sorry", "Thank you", "I'm happy"

jredwards · 8 months ago
I've been kicking around idea for a similar open source project, with the caveats that:

1. I'd like the backend to be configured for any LLM the user might happen to have access to (be that the API for a paid service or something locally hosted on-prem).

2. I'm also wondering how feasible it is to hook it up to a touchscreen running on some hopped-up raspberry pi platform so that it can be interacted with like an Alexa device or any of the similar offerings from other companies. Ideally, that means voice controls as well, which are potentially another technical problem (OpenAI's API will accept an audio file, but for most other services you'd have to do voice to text before sending the prompt off to the API).

3. I'd like to make the integrations extensible. Calendar, weather, but maybe also homebridge, spotify, etc. I'm wondering if MCP servers are the right avenue for that.

I don't have the bandwidth to commit a lot of time to a project like this right now, but if anyone else is charting in this direction I'd love to participate.

z3ratul163071 · 8 months ago
I've created exactly this for myself: https://v3rtical.tech/public/sshot.png

It runs locally, but it uses API keys for various LLMs. Currently I much prefer QwQ-32B hosted at Groq. Very fast, pretty smart. Various tools use various LLMs. It can currently generate 3 types of documents I need in my daily work (work reports, invoices, regulatory time-sheets).

It has weather integration. It can parse invoices and generate QR codes for easy mobile banking payments. It can work with my calendars.

Next I plan to do the email integration. But I want to do it properly. This means locally synchronized, indexable IMAP mail. It might evolve into an actually usable desktop email client (the existing ones are all awful). We'll see...

panki27 · 8 months ago
You might want to take a look at SillyTavern. Supports multiple backends, accepts voice input, and has a plugin system.
accrual · 8 months ago
Also Open WebUI. It's a very nice piece of software that provides a ChatGPT/Claude-like interface, but with lots of extra features.

https://docs.openwebui.com/

dostick · 8 months ago
I keep hearing about it but never got around to checking it out; the name suggests it may be a waste of time. Maybe it’s a fantastic project but the name lets it down?
fallinditch · 8 months ago
Having multiple backends can be a good approach, with various LLMs for different specialized tasks. I haven't tried it yet, but WilmerAI is an option for routing your inputs to the appropriate LLM, and it works well with SillyTavern.
Arcuru · 8 months ago
I also want an OSS framework that lets me extend it with my own scripting/modules and is focused on being an assistant for me and my family. There's a shared set of features (memory storage/retrieval, integrations with chat/email/etc. interfaces, syncing to calendar/Notion/etc., notifications) that, put into a common framework, would be really powerful.

I also don't have time to run such a thing but would be up for helping and giving money for it. I'm working on other things including a local-first decentralized database/object store that could be used as storage, similar to OrbitDB, though it's not yet usable.

Mostly I've just been unhappy with having access to either a heavily constrained chat interface or having to create my own full Agent framework like the OP did.

theshrike79 · 8 months ago
So something like MCP, but with a slightly different, more focused, scope?
kovek · 8 months ago
Why not use a smartphone for the user interface?
Workaccount2 · 8 months ago
Lately I have been experimenting with ways to work around the "context token sweet spot" of <20k tokens (or <50k with 2.5). Essentially doing manual "context compression", where the LLM works with a database to store things permanently according to a strict schema, summarizes its current context when it starts to get out of the sweet spot (I'm mixed on whether it is best to do this continuously like a journal, or in retrospect like a closing summary), and then passes this to a new instance with fresh context.

This works really effectively with thinking models, because the thinking eats up tons of context, but also produces very good "summary documents". So you can kind of reap the rewards of thinking without having to sacrifice that juicy sub 50k context. The database also provides a form of fallback, or RAG I suppose, for situations where the summary leaves out important details, but the model must also recognize this and go pull context from the DB.

Right now I'm trying it out to make essentially an inventory management/BOM optimization agent for a database of ~10k distinct parts/materials.
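
A bare-bones sketch of that compression loop, assuming a generic llm() call and a rough count_tokens() estimate (both placeholders) with SQLite as the permanent store:

```python
import sqlite3

TOKEN_BUDGET = 20_000  # the "sweet spot" ceiling mentioned above

def llm(messages):           # placeholder: call your model of choice
    raise NotImplementedError

def count_tokens(messages):  # placeholder: e.g. a tokenizer-based estimate
    raise NotImplementedError

db = sqlite3.connect("memory.db")
db.execute("CREATE TABLE IF NOT EXISTS facts (topic TEXT, fact TEXT)")

def step(messages, user_msg):
    messages.append({"role": "user", "content": user_msg})
    messages.append({"role": "assistant", "content": llm(messages)})

    if count_tokens(messages) > TOKEN_BUDGET:
        # Ask for a closing summary, persist it, then restart the conversation
        # with only the summary carried over as context.
        summary = llm(messages + [{"role": "user",
                                   "content": "Summarize this session's durable facts."}])
        db.execute("INSERT INTO facts VALUES (?, ?)", ("session-summary", summary))
        db.commit()
        messages = [{"role": "system", "content": f"Prior context summary:\n{summary}"}]
    return messages
```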

jasonjmcghee · 8 months ago
I am excitedly waiting for the first company (guessing / hoping it'll be anthropic) to invest heavily in improvements to caching.

The big ones that come to mind are cheap long-term caching and innovations in compaction and differential approaches, like: is there a way to use only the parts of the cached input context we need?

manmal · 8 months ago
Isn’t a problem there that a cache would be model specific, where the cached items are only valid for exactly the same weights and inference engine? I think those are both heavily iterated on.

mikethemerry · 8 months ago
Along the same lines, I've just done a build called Jeeves. A bit less flair, but very fast to put together. The stack is:

1. Claude Desktop

2. Projects

3. MCPs for Notion and Todoist, with email + WhatsApp being explored for the next upgrade

This is to support my productivity workflows for consulting + a startup. There are a few Notion databases - clients, projects, meetings - plus a Jeeves database. How the Jeeves database gets used is up to Jeeves, with some guidance. Jeeves uses his own database for things like tracking a migration of all my previous meeting notes etc. under the new structure.

For my databases, I've set up my best practices for use: here's how my minutes look, here's what client one-pagers look like, here's the information to connect it all together, and here's how I manage to-dos. I then drop transcriptions into a new chat, with some text-expanding prompts in Alfred for a few common meetings or similar, and away he goes. He'll turn the transcript into meeting notes, create the to-dos, check everything with me, do a pass, and then go and file everything away into Notion and Todoist via MCP.

It's also self-documenting on this front. The Todoist MCP had some bugs, so I instructed Jeeves to run all the various use cases it could, figure out the limitations and strengths, and document them; that's filed away in the Jeeves database where it can be pulled into context.

It lacks the cron features I would like, but honestly, dropping a once-a-day prepared prompt into Claude is hardly difficult.

angusturner · 8 months ago
The thing this really hits home for me is how Apple is totally asleep at the wheel.

Today I asked Siri “call the last person that texted me”, to try and respond to someone while driving.

Am I surprised it couldn’t do it? Not really at this point, but it is disappointing that there’s such a wide gulf between Siri and even the least capable LLMs.

charlieyu1 · 8 months ago
Siri popped up yesterday evening and suggested I set a 7-minute timer. I think I had done that a few times during the week for cooking or something. It's a pretty stupid suggestion; if I needed it, I would do it myself.