A different one, but the spotted lanternfly finally found my grapes. I'm at a total loss for how to protect them. The local university is studying oils and sprays, but they don't have any guidance yet.
The wheel bug is the first predator to realize the lanternfly is a tasty morsel. I hope we can continue coaching other insects to eat the invasives.
I hope your grapes make it through
Besides the famous case of the chestnut:
Dogwoods are being wiped out, mostly gone in some areas, disease originating from Asia: https://henderson.ces.ncsu.edu/2021/03/native-dogwoods-long-...
Sassafras trees wiped out by Asian beetle causing laurel wilt: https://www.lsuagcenter.com/articles/page1685633928383
American elms largely wiped out by Dutch elm disease (also actually originates from Asia) https://www.invasivespeciesinfo.gov/terrestrial/pathogens-an...
Others in this thread have talked about the threats to ash. It's very disheartening, but I guess it's the inevitable price of globalization.
Being from the US, I don't recall any such stories in the news.
Back in 2022 there was a solar-powered Airbus Zephyr drone that was tested over the southwestern US with a flight time of 64 DAYS. I wonder how this new drone is different and how a 73-hour flight is significant in comparison.
Here is an article about the Zephyr Drone and its crash that ended its nearly record-tying flight:
https://simpleflying.com/airbus-zephyr-flight-ends/
Here is a flight replay from adsbexchange showing one day's worth of its flight path, where it traced out the Liberty Bell(?) and the shape of the lower 48 at nearly 70,000 ft. (Scrolling through the other dates shows more playful flight paths.)
https://globe.adsbexchange.com/?icao=ae1313&lat=33.419&lon=-...
Is the actual UI open source, or is it something MotherDuck is allowing this to use while keeping it proprietary? Right now it doesn't appear that this would work without an internet connection.
Maybe the closed source UI is downloaded upon first execution for installation and then cached locally?
Or is this a web app that loads from the remote URL each time?
> Unfortunately they've been stupid
They could have had an enormous amount of goodwill, and they do nothing but burn it. Weird how they get a lot of money from Google and then, while technically meeting their mission by providing a browser alternative, seem to do a lot of self-sabotage in Google's favor.
I honestly think the best thing that could happen to Firefox would be for Mozilla's funding to be removed entirely, the foundation to die, and a better entity focused solely on Firefox, perhaps with more earnest and honest fundraising efforts and without a multimillion-dollar CEO salary, to fill the vacuum.
Stay in business, so monopoly arguments can be brushed aside.
But slowly erode privacy on the internet. And slowly lose user base.
> What part of this system understands 3 dimensional space of that kitchen?
The visual model "understands" it most readily, I'd say -- like a traditional Waymo CNN "understands" the 3D space of the road. I don't think they've explicitly given the models a pre-generated pointcloud of the space, if that's what you're asking. But maybe I'm misunderstanding? How does the robot closest to the refrigerator know to pass the cookies to the robot on the left?
It appears that the robot is being fed plain English instructions, just like any VLM would be -- instead of the very common `text+av => text` paradigm (classifiers, perception models, etc.), or the less common `text+av => av` paradigm (segmenters, art generators, etc.), this is `text+av => movements`.

Feeding the robots the appropriate instructions at the appropriate time is a higher-level task than is covered by this demo, but I think it's pretty clearly doable with existing AI techniques (/a loop).
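To make the `text+av => movements` idea concrete, here's a minimal sketch of what such a control loop might look like. Everything here is a hypothetical illustration -- the function names, the 7 Hz figure, and the joint count are my assumptions, not Figure's actual API or architecture.

```python
# Hypothetical sketch of a `text + av => movements` control loop.
# All names and shapes are illustrative assumptions, not a real robotics API.

def vla_policy(instruction, camera_frame, joint_state):
    """Stand-in for a vision-language-action model: consumes a text
    instruction plus sensor data, emits a low-level action vector."""
    # A real model would run a big transformer here; we fake a fixed-size
    # output of one target delta per actuator.
    return [0.0] * len(joint_state)

def control_loop(instruction, get_frame, get_joints, apply_action, steps):
    """Re-plan at a fixed rate (the demo seems to replan several times/sec)."""
    for _ in range(steps):
        frame = get_frame()          # latest camera image
        joints = get_joints()        # current proprioceptive state
        action = vla_policy(instruction, frame, joints)
        apply_action(action)         # hand the deltas to the actuators

# Toy usage: one robot fed a natural-language instruction.
log = []
control_loop(
    "pass the cookies to the robot on your left",
    get_frame=lambda: "<image>",
    get_joints=lambda: [0.0] * 24,   # pretend 24 actuated joints
    apply_action=log.append,
    steps=3,
)
print(len(log), len(log[0]))  # prints: 3 24
```

The point of the sketch is just the shape of the interface: text and sensor data in, raw joint commands out, with a dumb outer loop supplying the instruction at the right time.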
> How is this kind of speech to text, visual identification, decision making, motor control, multi-robot coordination and navigation of 3d space possible locally?
If your question is "where's the GPUs?", their "AI" marketing page[1] pretty clearly implies that compute is offloaded, and that only images and instructions are meaningfully "on board" each robot. I could see this violating the understanding of "totally local" that you mentioned up top, but IMHO those claims are just clarifying that the individual figures aren't controlled as one robot -- even if they ultimately employ the same hardware. Each period (7Hz?), two sets of instructions are generated.

> What possible combo of model types are they stringing together? Or is this something novel?
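The off-board arrangement implied above could be sketched roughly like this: one shared model server plans for both robots each tick, and the robots themselves only ship observations up and execute actions down. Again, every name and interface here is my own invented illustration, not anything Figure has published.

```python
# Hypothetical sketch of off-board compute shared across two robots.
# All class/function names are illustrative assumptions.

def shared_planner(observations):
    """Stand-in for the remote model: one pass yields a separate
    action per robot, keyed by robot id."""
    return {robot_id: [0.0, 0.0, 0.0] for robot_id in observations}

class Robot:
    """Thin on-board shell: holds sensors and actuators, no big model."""
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.executed = []

    def observe(self):
        return {"image": "<frame>", "instruction": "put away groceries"}

    def execute(self, action):
        self.executed.append(action)

def tick(robots):
    # Each cycle, gather every robot's observation, run the shared
    # planner once, then fan the per-robot actions back out.
    obs = {r.robot_id: r.observe() for r in robots}
    actions = shared_planner(obs)
    for r in robots:
        r.execute(actions[r.robot_id])

robots = [Robot("left"), Robot("right")]
for _ in range(2):
    tick(robots)
print([len(r.executed) for r in robots])  # prints: [2, 2]
```

This is why "two sets of instructions per period" doesn't necessarily mean two independent brains: the same server can emit both in one pass.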
Again, I don't work in robotics at all, but I have spent quite a while cataloguing all the available foundational models, and I wouldn't describe anything here as "totally novel" on the model level. Certainly impressive, but not, like, a theoretical breakthrough. Would love for an expert to correct me if I'm wrong, tho!

EDIT: Oh, and finally:
> Is anyone skeptical? How much of this is possible vs a staged tech demo to raise funding?

Surely they are downplaying the difficulties of getting this set up perfectly, and don't show us how many bad runs it took to get these flawless clips.

They are seeking to raise their valuation from ~$3B to ~$40B this month, sooooooo take that as you will ;)
https://www.reuters.com/technology/artificial-intelligence/r...
their "AI" marketing page[1] pretty clearly implies that compute is offloaded
I think that answers most of my questions.

I am also not in robotics, so this demo does seem quite impressive to me, but I think they could have been clearer about exactly which technologies they are demonstrating. Overall still very cool.
Thanks for your reply