Here’s an Unbox Therapy video
https://youtu.be/W2pYTRdX5LA?si=giLseweTP0zWf-M-
I’m sure more stuff will drop throughout the day, but in case folks wanted to see a hands-on.
Right now I don’t see it doing anything I can’t do from my phone already.
I’m very unclear on why these (Rabbit and Humane) aren’t just apps. I just don’t see people carrying these in addition to their phones and dealing with the split interaction ecosystem.
I also don’t understand why neither made their product in a watch form factor. That at least solves the majority of their problem of “having it on you in a convenient way.”
Also, as an aside: while I think the Teenage Engineering design is gorgeous, I’m disappointed that the packaging is so plastic-heavy. A plastic, cartridge-like case wrapped in plastic, neither of which will be used again.
Sounds significantly better than Humane
Still woefully unfinished compared to the marketing/demo hype.
And unable to answer well the question “shouldn’t my phone do this?”
But all in all I’m excited about the many attempts at hardware and interfaces — I refuse to believe chat or voice are “the best” interaction models for agents/LLMs.
Was it ever determined whether the "leaked source code" for this thing was real? If so, it showed that the Rabbit R1 didn't have any AI powering it -- all rule-based heuristics (which is... fine; it's not uncommon for rule-based heuristics to be labeled as AI these days).
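For the curious: a rule-based "AI" of the kind described is usually just pattern matching on the user's utterance. Here's a hypothetical sketch (all rule patterns and intent names are made up for illustration, not taken from any actual leaked code):

```python
import re

# Hypothetical rule-based intent router -- the kind of heuristic
# that often gets marketed as "AI". Patterns and intents are invented.
RULES = [
    (re.compile(r"\bplay\b.*\bmusic\b", re.I), "music_play"),
    (re.compile(r"\border\b.*\b(ride|car|uber)\b", re.I), "rideshare_order"),
    (re.compile(r"\bweather\b", re.I), "weather_lookup"),
]

def route_command(utterance: str) -> str:
    """Return the first intent whose pattern matches, else a fallback."""
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent
    return "fallback"  # anything unmatched gets punted elsewhere

print(route_command("play some music"))      # music_play
print(route_command("what's the weather?"))  # weather_lookup
```

No model, no learning: just an ordered list of regexes and a fallback. It works well enough for demos, which may be the point.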
I don't really understand these gadgets.
The Meta Ray-Bans have multimodal AI built in (yes, vision: they can look at what you see and explain it almost instantly, and it's ridiculous how good it is), can handle interactions and notifications from your phone, and are extremely fast and reliable.
They also don't look clunky at all anymore.
I seriously wear them all the time and would never want to carry a Rabbit with me along with my phone.
Wouldn't any of these devices be better paired with some sort of app connection to a phone? A standalone Rabbit device just seems under-powered; it could certainly be improved, and made capable of more, by being connected to a phone with more resources...
Why would I want a Rabbit if I could get a Rabbit that also connects to my mobile computing device that is engineered for performance?
The Rabbit R1 sounds much more usable and better than the significantly overpriced, subscription-based Humane AI Pin scam that VCs and paid promoters were screaming about as one of the "Best Inventions of 2023". [0]
Even if the Rabbit R1 is a scam, it looks like a very good and cheap one for a first version that "actually works" and is more polished than Humane's AI Pin.
This tells me that Humane's AI Pin performed worse than the Rabbit R1, that Humane is nowhere near worth >$1BN, and that after 5 years this is the result of all of it.
But so far, overall, there is nothing these devices can do better than a mobile device and an app at this time.
[0] https://time.com/collection/best-inventions-2023/6327143/hum...
There's no way I'd trust any company like this with all these credentials (however they spin it) for such a marginal convenience improvement – maybe – that I have to carry around another device for. I extremely don't get it. I am in no way a tech minimalist, but everything I have has its place, and if it gets in my way then it's shoved somewhere else very quickly. I cannot possibly imagine how this would earn its place.
Sounds like landfill.
No doubt this will be almost as successful as the Apple Vision Pro.
It's a phone, without the phone bits. Another device to carry, charge, upgrade every few years? It's not a hard no, but the barrier is also very high. Smart watches at least mostly solve the "carry" problem, as you wear it in a very normal and socially acceptable way.
Nothing about the demo addressed something that I felt was a pressing need. Neat tech concept, but not seeing the market.
Unfortunately I have no desire to talk to computers, particularly near other people
> Points at dog, "what's this", "it's a dog"
> Points at sign saying New York Stock Exchange, "what's this", "it's the NYSE"
These are impressive in that we did not have software that could do these sorts of things a few years ago, but they are functionally useless examples. People do not do this beyond writing tech reviews or a tech demo for a friend.
I'd really like to see useful demonstrations of things I actually want to know. For example, I saw a tower on the horizon ~15 miles away and wanted to know what tower it was. I pointed my phone at it, zoomed in extremely far, asked Google Lens, and it gave me the Wikipedia page. This is materially better in two ways: first, I can zoom in past all the other towers in my field of view; second, I get the Wikipedia page, on a screen. I don't know in advance what information I'll need about it (that depends on what it is), and that page is a good place to go for it.
What other demos do we have for these devices (R1, AI Pin) in real-world scenarios? So far the reporting on these is lacking, and where they have been demoed (e.g. the AI Pin's translation), the results have been poor.