I have one of these in my bag right now, never turned on though. I understand it's some custom Android hardware that stitches together a bunch of API-based NN service providers and doesn't do anything on device, but I'm not too mad at that -- for this price point this year, it was always only going to be something like that. I am interested in the ongoing form factor experimentation, and want to encourage startups to think differently about form factors, so I'm happy to buy stuff that doesn't quite hit the mark, as long as it's got an interesting idea.
Anyway, I'm looking forward to giving it a try, and I'll almost certainly never use it again once I've kind of pawed through it and seen what seems good and what seems bad. Similar for that Humane pin, which I think I paid for but never even received.
If you read a lot of sci-fi, you'll know authors have put an immense amount of time into thinking about what UI looks like once supercompute/AI access is broadly available, and I think it's clear that a phone isn't the final state for these interactions. What's less clear to me is what sort of compelling mid-way points there are between what we have now and ubiquitous environment-aware AGI (e.g. Minds from Iain Banks). It's one reason I thought Humane's projector was really interesting; we do a lot of visual interaction in daily life, so an audio-only earbud isn't likely to be the be-all/end-all.
Anyway, hardware startup guys, please make more of these, and figure out what's going to be compelling when we've scaled up bandwidth, scaled down latency and scaled up local compute. I'm looking forward to buying it.
To me, glasses seem like a very interesting form factor, perhaps with a smart watch or wristband for better hand gesture recognition.
You can wear them all the time, they work even when your hands are busy, they don't look weird, they're the perfect place for a microphone and a camera, and the frames are in just the right spot for delivering sound to your ears via bone conduction.
Input is probably going to be the biggest hurdle, just like it was with touch screens, especially in situations where using your voice isn't acceptable.
I think Google had the right idea with Google Glass, but they were far too early, just like IBM had exactly the right idea with Simon[1].
[1] https://en.wikipedia.org/wiki/IBM_Simon
Apple Vision Pro is unbelievably good at figuring out what you’re looking at, seamlessly adjusting for eye saccades and a host of other weird things our bodies do. I imagine the tech will eventually get shrunk down.
I think the question is: what’s hardest of the following:
* Knowing what you’re looking at or pointing at
* Getting you private audio
* Getting you public video
* Getting you private video
And we’ll get hardware constraints based on that list. It seems to me like private video is hardest in hardware without glasses, but public video might be harder in terms of capex - e.g. we need different materials in our spaces, or a lot of projectors, or … Anyway fun to speculate about all this.
Re Humane Ai Pin - did you never receive it because you cancelled your order? Or did you never onboard? There's a bit of an unusual process with ordering the Ai Pin where it won't ship until you "onboard", i.e. set up your account/service with Humane.
What’s the goal of this? Front loading a frustrating web based setup, so the out of box experience can be smooth and seem magical? Is that the goal? Kind of like how Amazon can pre-register a Kindle with a user’s account if they select it while ordering.
You’d think this would be part of the order process instead of a completely separate and detached step.
Aside from something bad like this security hole, I can't understand the hate; it's pretty obvious the R1 was a fun novelty, a gimmick.
Anyone with a basic understanding of tech would understand you're not going to get Apple-level breakthrough engineering from a start-up, and at that price point.
I feel this got swept up in the wave of hatred for the Humane AI Pin (again very obvious what it'd be). Humane's biggest fault was that they marketed it as being at Apple-level quality, an impossible goal they set for themselves.
Reminds me of Magic Leap.
I think the hate directed their way has more to do with the outsized claims of the founder, Jesse Lyu. For example, he claimed that you're buying a device with a "LAM", but it turned out to be Playwright scripts wrapping APIs + OpenAI APIs, and the underlying action model was missing (or very nascent at ship). The team also took a lot of shortcuts security-wise and is getting wrecked for it now (it's not just one hole):
https://x.com/xyz3va/status/1805689140639408277
https://x.com/xyz3va/status/1805985433156759781 (this is the OP)
https://x.com/xyz3va/status/1805993239368860069
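To make "Playwright scripts wrapping APIs + OpenAI APIs" concrete, here's a minimal sketch of that pattern. It's purely illustrative, not Rabbit's actual code -- the ride service, URL, selectors, prompt, and model name are all invented:

```typescript
// Hypothetical sketch of the pattern researchers described: an LLM routes the
// request, and a hand-written Playwright script drives a web UI. The service,
// URL, selectors, prompt, and model name are invented for illustration.
import OpenAI from "openai";
import { chromium } from "playwright";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// One canned "action" per supported service -- the part that was marketed as a
// learned Large Action Model.
async function orderRide(pickup: string, dropoff: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://ride.example.com/app"); // placeholder URL
  await page.fill("#pickup", pickup); // placeholder selectors
  await page.fill("#dropoff", dropoff);
  await page.click("button.request-ride");
  await browser.close();
}

async function handleUtterance(utterance: string): Promise<void> {
  // The LLM only decides which canned script to run and extracts its arguments.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // any chat model works for this sketch
    messages: [
      {
        role: "system",
        content:
          'Reply with JSON only: {"action": "ride" | "none", "pickup": string, "dropoff": string}',
      },
      { role: "user", content: utterance },
    ],
  });
  const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");
  if (parsed.action === "ride") {
    await orderRide(parsed.pickup, parsed.dropoff);
  }
}

handleUtterance("get me a car from the office to the airport").catch(console.error);
```

The point is that in a setup like this the LLM only routes to a handful of hand-written scripts; there's no learned "action model" anywhere in it.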
Doesn't help that he was shilling crypto right before Rabbit. What people love about the form factor has mostly to do with Teenage Engineering's quirky design. Somehow Jesse is a board member there, it seems.
In general I don't recommend trusting their software. If you like the hardware, you can flash AOSP and use Google Assistant -- it's way more capable and AOSP isn't harvesting your credentials. https://github.com/RabbitHoleEscapeR1/r1_escape
> I can't understand the hate, it's pretty obvious the R1 was a fun novelty, a gimmick.
I get pretty annoyed when people lie to me to sell me something even if it's obvious they're lying.
Even moreso if I want to try the thing they've actually built and want people experimenting with this stuff. If I didn't care I could just ignore it.
But if I do care and also I know their main claims are deceitful it means I have to waste energy wondering if the claims that aren't obvious nonsense are also deceitful.
On top of this they managed to make major news rounds with their bogus presentation. This means my family, who don't usually consume tech news and for whom none of it was obvious nonsense, got super hyped about it, and I had to tell them no, this is just tech people lying to you.
We SHOULD encourage hacking on form factors, novelty gimmick devices or not. Lying and misrepresenting while doing so is the opposite of that; it makes it harder for the next folks to want to give it a go.
And the founders asking for a billion dollar buyout? Just a little joke.
And the $200 you spent on a device that doesn't do what it was advertised as? Just a prank, bro.
I think it's because of the promises of the team (new Large Action Model) vs what's actually being delivered (the model is some scripts). The team has a history of overpromising and underdelivering (or scamming - depending on your perspective). It's also economically unviable. Somehow you're meant to get free LLM calls for life, but there's no way for them to actually cover those. There's not really any communication about how it might be a limited-time thing for early adopters or how it could ever get to be sustainable.
If they had focused on what they have, they probably could have charged the same amount and people would generally be OK with it. But they've overpromised and underdelivered again. I think the reaction is pretty understandable.
...to you. Someone who lurks Hacker News.
Given what some of the people I work with think AI is, I can't even imagine how bad it is out there.
I speculate the hate comes from the people invested in smartphones as they are today. Magic Leap didn't get mainstream hate like the Rabbit did, afaict.
> I think it's clear that a phone isn't the final state for these interactions
That isn't clear to me at all, although continuing to experiment is worthwhile.
"Battery powered computer which fits in a pocket, having a screen/speakers/radio/microphone/camera/vibrator" is a hard form factor to beat. There's a lot of room to improve the specifics of how they work, but I would bet on something recognizably similar being in widespread use hundreds of years from now, provided that industrial civilization continues.
Voice-driven ambient computing will probably become more prominent than it is now, but that doesn't cover anything visual, so a device which can provide images on command will stay relevant. For something like a map, audio can only provide an inferior experience, and maps are one of the non-negotiable uses of a personal device, and likely to remain so.
I think the watch or earbud are still the only acceptable wearable form factors. Add pinhole cameras so it's aware of its surroundings and you could have a very useful assistant that keeps up with what you're doing in realtime.
Banks and others imagine terminals as earrings, with (presumably) beam-forming audio in a ‘mid tech’ scenario. How visuals get shown varies, from “holograms created by magic” to “smart walls”. We seem to be pretty close, maybe a year or two out, to high-quality, spatially aware “show the human information right here in this room” — I’d bet that a small model could do a great job at picking a place to show info right now, but we don’t really have ‘smart’ environments yet in general.
We are working on something very different. To be honest, I got mad when I saw the Rabbit released. It was clear it's just a gadget and a grifter product. And I hate that we're spending much more time on really creating something and will now be much later than the grifters who rushed something out.
But then I guess they are smarter. We are running out of money (bootstrapped) and don't have a working product yet to pitch to investors. But I'd rather fail than release a half-assed product to make a quick buck and lie about it constantly.
I just don't know where to find potential investors to talk to who are up for high-risk consumer hardware and software, early stage, in Europe... If anyone has tips, they'd be greatly appreciated. Not much time left before we're dead, and I think it's better to die close to a prototype than close to potential funding.
Maybe some more (contact) information would be helpful? It is not quite clear what you are doing, but it sounds interesting. If you fold, please open source it; however, I don’t think you have to (there is money in Europe, but it depends on what you need).
But as I said, I'd love any tips.
I honestly don't understand pretending that there was any value in this project. The founders have a history of grifting, the product itself betrays the advertising, and the software appears held together with shoestrings and bubblegum.
As the founders beg for a buyout even as the product has completely failed, it can't be any more clear that this was a get-rich-quick scheme by the founders, and never a serious venture.
"Experimenting with form factors" is a cop-out for what they actually did.
https://www.youtube.com/watch?v=ddTV12hErTc is a review by Marques Brownlee, covering how bad it is.
https://www.youtube.com/watch?v=zLvFc_24vSM Coffeezilla goes into the ways that it's arguably a scam, as well as diving into some of how it is badly implemented.
Never heard of this. So I went to the website to find out what it is. "Your pocket companion" the top of the website reads. Ok, don't know what that means. Scroll down "push to talk button", "conversational interface", and some other hardware features. Still no idea what it's for. They have a keynote video. I press play. It starts with them showing a bunch of press coverage and social media. Still no description of what it is. Not even a demo of what it does. I got several minutes into the video and it's all acting like I already know what it is. I've completely lost interest. Mystifyingly bad marketing.
On the technical side, it gets worse. For example, it turns out things like their Spotify "integration" are actually interacting with the Spotify website. It doesn't even use Spotify's APIs. So it breaks any time Spotify modifies their website.
[0] https://www.youtube.com/watch?v=zLvFc_24vSM (video)
Rabbit itself is a wrapped Android app [1].
[1]: https://arstechnica.com/gadgets/2024/05/rabbit-r1-ai-box-is-...
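For contrast, Spotify does publish a documented Web API for playback, so the difference looks roughly like the sketch below. The playback endpoint and OAuth scope are real; the web-player selector is a guess for illustration, and whatever Rabbit actually does isn't public.

```typescript
// Two ways to "play a track": the documented Web API vs. driving the web player.
import { chromium } from "playwright";

// (a) Official Spotify Web API: versioned and documented. Needs an OAuth access
// token with the user-modify-playback-state scope.
async function playViaApi(accessToken: string, trackUri: string): Promise<void> {
  const res = await fetch("https://api.spotify.com/v1/me/player/play", {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ uris: [trackUri] }),
  });
  if (!res.ok) throw new Error(`Spotify API error: ${res.status}`);
}

// (b) Scraping open.spotify.com with a logged-in browser session. The selector
// here is a guess for illustration -- whatever the real one is, it breaks the
// next time Spotify ships a redesign, which is the failure mode described above.
async function playViaWebUi(trackUrl: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(trackUrl);
  await page.click('[data-testid="play-button"]'); // guessed selector
  await browser.close();
}
```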
Yes; at the same time, it’s a scam only because people didn’t like it. If it had gotten popular, it would have been a successful hustle and a new business model.
Isn’t the vision just “AI agent + Playwright logged into all your accounts”? And it can’t be an app on your phone because no app store would allow that?
The hardware is far from good. I have one and got real intimate with it. The battery life is ass and holding everything back (e.g. maybe why they disable the touchscreen for a lot of things?). The MediaTek chip inside is really cheap and bad. There's no edge compute, so it's clear that they want it to be an interface to cloud services, which really raises the question "then why not just use your phone."
The vision is just what everyone is working on -- connecting LLMs to tools. Wake me up when it can do half of what Google Assistant can already do for me. I'll add that Google Assistant is free, and fast.
No.
but only on the r1 subdomain, which is used for sending spreadsheets to users, so all spreadsheets can be read
+ from a researcher in discord: “we couldve also read auth.rabbit.tech […] which is basically all rabbithole password resets, aka arbitrary account takeover including admin accounts”
Rabbit data breach: all r1 responses ever given can be downloaded - https://news.ycombinator.com/item?id=40792684 - June 2024 (32 comments)
I hope to find the time to blog about this soon.
I had bought one, but after the Hacker News articles I vowed not to open the box and to send it back. I was promptly refunded, and I'm glad I followed through.
My R1 is quite terrible; queries only work about half the time. When it does work, it has limited information/functionality. Keeping it to hack on the hardware though, it's pretty spiffy.
I am quite happy with mine. I installed LineageOS (GSI image without Google stuff) on my Rabbit R1 - the goal is to have a less addictive mobile device that can do offline maps with GPS and otherwise be a modern iPod (listening to podcasts and music) - and it's quite good for that. Using the wheel to zoom in Organic Maps is fun :-)
Once it came to light that the Rabbit founders' last venture was a blatant rug-pull Web3/NFT project, it was only a matter of time before this project fell apart as well.
https://www.xda-developers.com/rabbit-nft-company-past/
The claims they made about the R1's capabilities even echoed claims they had previously made about their defunct GAMA "Quantum Engine" Web3 buzzword soup.
> Later, in August 2023, the "Quantum Engine" became OS2, a personalized operating system that could do things for you like order groceries.