I’ve been using Frigate for six months on a Raspberry Pi 4 with a Google Coral TPU. It’s connected to two network cameras streaming at 2 MP each.
Frigate standalone works super smoothly with no hiccups at all. I am using object detection for people and have not yet had a false positive or false negative. Additionally, I record not only the events but also 24/7 video. Frigate takes care of garbage-collecting old assets.
I have it hooked up to my Home Assistant running on the same Raspberry Pi. From there, I get notifications to my phone which include a live video, snapshot and video recording. The UX and configuration options are way better than any commercial end user product I have found.
It’s been a literal lifesaver, also fun and easy to use. Would recommend 10 of 10. I have no affiliation with the maintainers.
I use Home Assistant and Frigate on a $100 x86 no-name micro-machine with five 4K cameras, and it is awesome. CPU use does not go above 10%.
Would not claim zero false positives though. However, it is smart enough to filter out non-moving false positives, which many commercial-grade systems cannot, lol.
Can't say it saved my life, but I programmed my smart bulbs to go red if a bear was spotted in any frame in the past 30 min though.
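For reference, that kind of automation can be sketched in Home Assistant roughly like this (illustrative config, not the poster's actual setup; the entity name and `bear` label are placeholders, and the event payload shape depends on your Frigate version):

```yaml
automation:
  - alias: "Bear alert"
    trigger:
      # Frigate publishes object events on the frigate/events MQTT topic.
      - platform: mqtt
        topic: frigate/events
    condition:
      - condition: template
        value_template: "{{ trigger.payload_json['after']['label'] == 'bear' }}"
    action:
      - service: light.turn_on
        target:
          entity_id: light.living_room   # placeholder entity
        data:
          color_name: red
```

You'd also want a timer or delay to revert the lights after the 30-minute window.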
This is hilarious. I programmed my living room to go red if the air quality in my daughter’s room dropped below a certain level. It should be annoying by now but instead it’s hilarious.
I've also used the same solution, although I ended up scaling from a Pi to a larger 13th-gen Intel box with two USB Corals due to the number of cameras. It's been ridiculously reliable running from a docker compose stack for years now, including using Watchtower to auto-upgrade the Frigate container. It's really easy to map the Corals via docker compose as well.
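For reference, mapping a USB Coral into the Frigate container takes just a couple of compose lines (a minimal sketch; the image tag and host paths are illustrative):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb:/dev/bus/usb   # pass the USB Coral through
    volumes:
      - ./config:/config
      - ./media:/media/frigate
```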
It's nuts how cheaply you can make such a good system with AI-detection features; it's more than paid for itself vs commercial options with monthly fees. High-quality weatherproof PoE cameras are crazy affordable now too, and you can VLAN them off your home network with no connection to the internet to further harden the system.
I have almost exactly the same setup, except using the intel gpu for accelerated inference with an OpenVINO Yolo model and HA for notifications. Super reliable.
It looks awesome and I look forward to trying it out.
Curious how you claim zero false negatives though (as in missing a person it should have detected), unless you’re reviewing all the data or have another system hooked up to it verifying? Or perhaps you simply mean to imply nothing bad has happened due to a missed detection?
I'm curious how it did during Halloween with all the costumes? Are you able to have it pre-alert you that kids are coming to the door?
I have set up two zones per camera: The “yard” and the “entry” which is directly in front of the door. Whenever someone is at the entry, I get a notification. Since I walk these paths myself, I have hundreds of test points. Apart from that, I’m happy to _always_ get a notification before the bell rings. And I never have to open the door before I know who is there.
The “yard” part, I review every week. Here, of course, I have no data to compare it to, so I cannot say if it’s missing anything.
So yes, I’m certain it misses nothing that’s important to me: is there someone at my door, and who is it.
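For reference, a two-zone setup like that looks roughly like this in Frigate's config (the camera name and coordinates are placeholders, not the poster's actual values):

```yaml
cameras:
  front_door:
    zones:
      # Large zone covering the whole approach.
      yard:
        coordinates: 0,461,3,0,1919,0,1920,843
      # Small zone directly in front of the door; notify on entry here.
      entry:
        coordinates: 1434,1080,1508,616,1920,668,1920,1080
```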
I've been using it for 2 months now, and I strongly agree - it's very reliable, and object detection is spot on. I'm running 2 cameras with just the cpu, and still have plenty of breathing room.
Sure. The cameras are Tapo C320WS. They are cheap, waterproof, connect to wifi if there’s no ethernet and can stream video in two resolutions at the same time. I use the lower resolution for motion detection and the higher resolution for object detection and event recording.
The whole thing is running in docker-compose on Raspberry Pi and Coral TPU.
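Splitting the substream and main stream between detection and recording is done with input roles in Frigate's camera config; a rough sketch (the RTSP paths and credentials are placeholders):

```yaml
cameras:
  driveway:
    ffmpeg:
      inputs:
        # Low-resolution substream: cheap to decode for detection.
        - path: rtsp://user:pass@10.0.0.10:554/stream2
          roles:
            - detect
        # High-resolution main stream: kept for recordings.
        - path: rtsp://user:pass@10.0.0.10:554/stream1
          roles:
            - record
```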
I have been testing Frigate for a couple of months now. It is a very ambitious project, and I would love to see it succeed.
Here are my observations:
* You don’t actually need hardware decoding or a Coral, but they do help. You will of course need to provision more CPU horsepower for the NVR.
* Motion detection uses the usual implementation from OpenCV. Unfortunately this algorithm is not very good in my experience. Many things I would consider motion are missed (false negatives), and many things I would not consider motion are detected (false positives). These factors mean that one is tempted to go ham on masking to filter out the false positives, which then leads to further false negatives. I’m genuinely surprised the motion algorithm implemented in OpenCV is still the state of the art of what’s available openly.
* Object detection is somewhat knee-capped by the models available publicly. They are not very good either. Frigate has built its behaviour around these models with an assumption that they are largely accurate, which in my experience has ended up with quite a few missed recordings of important events. That led me to switch to creating recordings based on motion (I’m not in a very densely populated area and reviewing the recordings isn’t too onerous).
* Support for coral is… shaky at best. There are some indications that the production of these devices has largely stopped (and finding them to purchase is hard and expensive,) and maintenance of the drivers and libraries to interface with coral seems to be minimal or non-existent to the point where some Linux distributions have started dropping the relevant packages from their repositories. On the upside, running these models on the CPU isn’t that expensive, especially considering that the models are invoked very sparingly.
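For context on the motion-detection point above, the "usual implementation" is essentially background subtraction: keep a running average of past frames and flag pixels that deviate from it. A minimal numpy sketch (illustrative, not Frigate's or OpenCV's actual code):

```python
import numpy as np

class RunningAverageMotion:
    """Classic background subtraction: maintain an exponentially weighted
    average of past frames and flag pixels whose difference from that
    background exceeds a threshold."""

    def __init__(self, alpha=0.05, threshold=25):
        self.alpha = alpha            # how quickly the background adapts
        self.threshold = threshold    # per-pixel difference cutoff
        self.background = None

    def apply(self, frame):
        f = frame.astype(np.float32)
        if self.background is None:
            self.background = f       # first frame seeds the background
        mask = np.abs(f - self.background) > self.threshold
        # Blend the current frame into the background model.
        self.background = (1 - self.alpha) * self.background + self.alpha * f
        return mask
```

Its weaknesses are exactly the ones described: slow or small movements blend into the background (false negatives), while shadows and illumination changes trip the threshold (false positives).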
I’m currently thinking of moving over to continuous recording, perhaps trying out moonfire-nvr or mayhaps handwriting a gstreamer pipeline. Simple software -> fewer failure modes.
(NB: I worked at a computer vision startup in the past, my views are naturally influenced by that experience.)
> I’m currently thinking of moving over to continuous recording, perhaps trying out moonfire-nvr or mayhaps handwriting a gstreamer pipeline. Simple software -> fewer failure modes.
Moonfire's author here. Please do give it a try! Right now it's a little too simple even for me, lacking any real support for motion or events. [1] But I'd like to keep that simple server core that just handles the recording and database functionality, while allowing separate processes to handle Frigate-like computer vision stuff or even just on-camera motion detection, and enhance the UI and add stuff like MQTT/HA integration to support that well. I'd definitely welcome help with those areas. (And UI is really not an area of expertise of mine, as you can see from e.g. this bug: <https://github.com/scottlamb/moonfire-nvr/issues/286>.)
For now I actually run Moonfire and Frigate side-by-side. They're almost complete opposites in terms of what they support, but I find both are useful.
[1] The database schema has the concept of "signals" (timeseriesed enums like motion/still/unknown or door open/door closed/unknown), but my code to populate that based on camera or alarm system events is in my separate "playground" of half-finished stuff, and the crappy UI for it is rotting in one of my working copies. I'd like Moonfire's database/API layer to also have a more Frigate-like concept of "events" and one of "object tracks".
A killer feature would be a time series showing the rate of change from frame to frame. That would allow someone to jump to the more interesting parts of a video.
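Such a time series is cheap to compute, e.g. as the mean absolute pixel difference between consecutive decoded frames (an illustrative numpy sketch):

```python
import numpy as np

def change_rate_series(frames):
    """Mean absolute pixel difference between consecutive frames.

    `frames` is an iterable of equally-sized greyscale numpy arrays; the
    result has one score per frame transition and can be rendered as a
    timeline for jumping to the busy parts of a recording.
    """
    scores = []
    prev = None
    for frame in frames:
        f = frame.astype(np.float32)
        if prev is not None:
            scores.append(float(np.mean(np.abs(f - prev))))
        prev = f
    return scores
```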
> Motion detection uses the usual implementation from OpenCV. Unfortunately this algorithm is not very good in my experience.
In Frigate 0.13 (currently in beta) the motion detection has been fully rewritten, which has been a large improvement in my and others' experience. We also have docs now that walk users through tuning the motion detection.
This comes along with many other changes along the lines of what you are describing, like object tracking and improvements to initial object detection when motion is first detected.
I found motion detection to be the easy part when building my NVR. I just used trial and error and scipy filters and eventually found something I'm happy with.
Handwriting a GST pipeline is pretty much what I did. I start with frame differences (I only decode the keyframes that happen every few seconds, so motion detection has to work in a single frame to have good response time).
Then I do a greyscale erosion to suppress small bits of noise and prioritize connected regions.
After that I take the average value of all pixels, and I subtract it, to suppress the noise floor, and also possibly some global uniform illumination changes.
Then I square every pixel, to further suppress large low-intensity background noise, and take the average of those squares.
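The steps above (difference, erosion, mean subtraction, squaring, averaging) can be sketched with numpy/scipy; this is an illustrative reconstruction, not the poster's actual code:

```python
import numpy as np
from scipy import ndimage

def motion_score(prev_frame, frame, erosion_size=3):
    """Score motion between two greyscale keyframes:
    frame difference -> greyscale erosion -> subtract the mean
    (noise floor / global illumination) -> square -> mean of squares."""
    # Absolute frame difference, in float to avoid uint8 wraparound.
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    # Greyscale erosion suppresses isolated noisy pixels and favours
    # connected regions of change.
    eroded = ndimage.grey_erosion(diff, size=(erosion_size, erosion_size))
    # Subtracting the average removes the noise floor and uniform
    # illumination changes; clip so negatives don't cancel real motion.
    centred = np.clip(eroded - eroded.mean(), 0, None)
    # Squaring emphasises strong localized changes over weak diffuse ones.
    return float(np.mean(centred ** 2))
```

Thresholding this score against a calibrated baseline then gives the motion/no-motion decision.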
I mostly only run object detection after motion is detected, and I have a RAM buffer to capture a few seconds before an event occurs.
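A pre-event RAM buffer like that is essentially a bounded deque; a minimal sketch (the parameters are illustrative):

```python
from collections import deque

class PreEventBuffer:
    """Keep the last `seconds * fps` frames in RAM so a few seconds of
    footage *before* a detection can be prepended to the recording."""

    def __init__(self, seconds=5, fps=2):
        self.frames = deque(maxlen=seconds * fps)

    def push(self, frame):
        # Oldest frames fall off automatically once maxlen is reached.
        self.frames.append(frame)

    def flush(self):
        """Drain the buffer when an event starts, returning the backlog."""
        out = list(self.frames)
        self.frames.clear()
        return out
```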
NVR device code (in theory this can be imported and run from a few-line Python script), but it needs some cleanup and I've never tried it outside the web server:
https://github.com/EternityForest/iot_devices.nvr/blob/main/...
GST wrapper utilities it uses, motion detection algorithms at top:
https://github.com/EternityForest/scullery/blob/Master/scull...
My CPU object detection is OK, but the public, fast, easy to run models and my limited understanding of them is the weak point. I wound up doing a bunch of sanity check post filters and I'm sure it could be done much better with better models and better pre/post filtering.
> Support for coral is… shaky at best. There are some indications that the production of these devices has largely stopped (and finding them to purchase is hard and expensive,) and maintenance of the drivers and libraries to interface with coral seems to be minimal or non-existent to the point where some Linux distributions have started dropping the relevant packages from their repositories.
I've recently gone through the process of trying to install pycoral on Rocky Linux 9. I had to build from source, and there was some challenge because documentation for the build process was sparse. There was some conflicting information about files I had to edit, values I had to set, what was supported and what wasn't.
> Object detection is somewhat knee-capped by the models available publicly. They are not very good either.
Yep, the current models are based on ImageNet, which is of course wildly different content to the typical security camera. It's no surprise that its recognition is often pretty poor, especially from the typical ceiling angles that cameras are mounted at.
Frigate seems like one of the most promising new NVR/VMS products out there, but still lacks the feature-completeness to replace Blue Iris. The biggest gap right now in my mind is Frigate's poor feature set for continuous recording, which seems like very basic functionality but ends up as a low priority for a lot of these "event-first" products that are more patterned off of consumer products.
My solution for this at the moment is to run a separate NVR using continuous recording in parallel with a Frigate instance.
- Redundant disks/mirroring on the NVR
- Replication of Frigate's Event Database and Recordings to remote network storage
I primarily use Frigate as a general event index, with 'active-objects' as its recording criteria, and look at the NVR when there may be gaps in Frigate's coverage.
I've also been writing my own software to integrate with Frigate to help make better sense of activity and events at a macro level, compared to its current user interface.
I've been using it for continuous recording of my cameras. It would be working flawlessly except for the piss-poor firmware of my Reolink cameras causing their RTSP server to choke.
Frigate recently bundled an instance of go2rtc which can connect to Reolink cameras via http/flv and re-stream as RTSP. This solved my issues with Reolink.
go2rtc also works nicely for on demand transcoding of my H265-only cams to H264 to view the live stream in Firefox.
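Both tricks live in the go2rtc section of Frigate's config; a rough sketch (camera address and credentials are placeholders, and the Reolink URL format can vary by model and firmware):

```yaml
go2rtc:
  streams:
    reolink_front:
      # Pull via http-flv instead of the camera's flaky RTSP server.
      - "ffmpeg:http://10.0.0.20/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=admin&password=pass"
    reolink_front_h264:
      # On-demand transcode of an H.265 stream to H.264 for browsers
      # (like Firefox) that can't play H.265.
      - "ffmpeg:reolink_front#video=h264"
```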
I use Blue Iris, which barely works because I'm overloading it with 9 Reolinks at 4K. I'd like to figure out my bottleneck, but it barely works well enough that it's not worth mucking with it.
Not really. Blue Iris has too many features that aren't really needed, and the lack of Linux support makes it a non-starter in many cases. Also, the AI features on BI are far worse.
I'm currently using Frigate for continuous recording and it's great and I don't feel like I'm missing any features. What are some features that are missing?
>which seems like very basic functionality but ends up as a low priority for a lot of these
for security based purposes, why would you want to save all of that data that is not changing? you'll just end up fast-forwarding to the interesting bits anyways if you have to go to the footage.
Neither motion nor object detection is really that reliable, in any system I've worked with. The norm in commercial systems has long been to record continuously and use motion/object detection/other classifiers to annotate the recording. That gives you the opportunity to search for events, like thefts, that may not have been detected by classification. You also have access to footage well before and after the detected event, which is often absolutely critical to answering useful questions (e.g. how did someone get past the fence?). Common patterns like 10 seconds before/30 seconds after just aren't always sufficient.
Unfortunately consumer devices are almost always cloud-based, where storage but especially upstream bandwidth are much more costly considerations, so recording only on detection has become the norm in the consumer world.
External triggers are also an important feature in commercial systems that a lot of open source projects miss---but Frigate isn't guilty of this one, it can receive triggers by MQTT, which is the same thing I do right now with Blue Iris. That's the big thing that has me optimistic about Frigate going forward. Because motion and object detection are so inconsistent, triggering VMS events based on access control systems and intrusion sensors is often a much more reliable (and even easier to maintain) approach.
Perhaps this has improved, but when I tried it out a few months ago I found that the playback for continuous recording was extremely basic and didn't have features like easy-to-use variable speed scrub to make it practical to search for things. I might try it out again today because I would like to go to something that doesn't have to run on Windows, but my use case is more around continuous recording with around a month of history than event detection.
Space management for rolling retention is also a new feature in Frigate and very basic, I don't think it has a way to do different retention policies by camera group and alarm.
OK. I have a bunch of Ring cameras and cannot get them connected to Amazon anymore. The person who sold the house didn't leave the packaging, and the Amazon app does not allow connection to the temp wifi; you must scan the QR code or enter the serial number. I've never been able to get them reconnected since the hurricane last year.
Can I somehow use these with Frigate? Is there a way to root these Ring cameras and use them?
I never liked the idea of paying a service fee, nor having Amazon pull the videos into their free neighborhood watch program.
Any suggestions on using them now?
You can’t, Amazon doesn’t support any open protocols with their Ring cameras. You can get the serial number off the camera itself, usually on the back, so it needs to be removed first.
Also most of those ring cameras are 2.4ghz only so you need a dedicated 2.4ghz wifi network. It won’t connect at all if you have an SSID that is broadcasting both 5ghz and 2.4ghz on the same name.
I had that problem with some Tapo smart plugs, and an ecoflow delta max battery. To connect those, I disabled SSID broadcast on 5ghz temporarily; connected the plugs to the 2.4ghz network; then re-enabled the 5ghz. Works fine now with both networks up in the same SSID.
Try registering it with your address. I can't remember if it asks you "did you just buy the house?" or not, but I had this problem and it sends an email to the prior registrant, and if they voluntarily release it, or do nothing for 30 days (IIRC), it turns the camera back over to you. I got this far with it, but I was too lazy to install the app and actually set it up.
Runs really well within a Docker container on my M1 Mac Mini with three 2K (2560x1440) Reolink cameras.
Paired with running Scrypted for HomeKit Secure Video (have also found using the RTSP streams rebroadcast from it to be more stable than having multiple sinks connected straight to the camera), and this makes a really good persistent NVR solution that I can also use to monitor remotely without necessarily VPN’ing back into my home network or exposing Frigate through a separate reverse proxy.
Really the best NVR / motion detection out there. Incredibly good camera support through go2rtc and ffmpeg. Supports accelerated video codecs via ffmpeg. You can use your own Yolo weights and models for object detection. There are some that are trained for high angle person detection that are great for surveillance cameras, for example.
Frigate also has pretty solid OpenVINO support now which means accelerated inference on modern-ish intel cpu/gpus, which is a game changer when you have several cameras.
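Enabling the OpenVINO detector is a few lines of config; a sketch (the exact model path and options depend on your Frigate version, so check the detector docs):

```yaml
detectors:
  ov:
    type: openvino
    device: AUTO   # let OpenVINO pick CPU or iGPU
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml
```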
- an intel-based PC (can be a minipc, doesn't need a powerful CPU)
- a USB Coral TPU ($60)
- some wired PoE cameras (from $60 each)
My question: what do people typically use to power the cameras? A single PoE switch, or multiple PoE injectors?
My Arlo Pro 2 cameras are apparently EOL and might stop receiving free cloud services in a couple of months. So this seems like a good time to upgrade to higher resolution cameras.
(The Frigate docs advise against using Wi-Fi cameras, which would otherwise be my preference.)
I just have a PoE switch. It's actually easier to run ethernet than power, especially outside. Clogging up your WiFi spectrum with megabits of constant video seems like a terrible idea.
> Clogging up your WiFi spectrum with megabits of constant video seems like a terrible idea.
Yes, that's the thing I like about the Arlo system I have now: it has its own wifi network so, even if it's using spectrum, it's probably not affecting my LAN throughput.
> It's actually easier to run ethernet than power, especially outside.
This is true, but the house where I live already has power available everywhere I might need a camera. The thing I don't like about running new cables is the need to drill holes through exterior walls.
If you have a newer intel-based PC, you might not even need the Coral. Frigate added support for Intel's OpenVINO. They're also adding support for the RK3588's rockchip npu, but it's still newer so I wouldn't recommend unless you like tinkering.
For PoE, I'd just do whatever is convenient. I've done setups with 2 PoE switches before so I could just run one cable between the front/back and then branch out from there.
Single PoE switch with cameras on a VLAN (so they don't have internet access). I use my old framework main board (yay for reuse!). Started with a USB Coral but switched to NVMe, which is more reliable passing through to a VM.
Last time I looked at this the Coral devices were out of stock and price gouged. Looks like I can at least order now with a lead time of 22 weeks from mouser.
> what do people typically use to power the cameras? A single PoE switch, or multiple PoE injectors?
It basically doesn't matter at all - I have a mixture of both in my home, multiple PoE switches and multiple PoE injectors for things like cameras, wireless APs etc. Use whatever fits needs/budget/location, you don't have to go nuts buying a single high-end PoE switch. There are often good deals to be had on used PoE switches on eBay etc. too if really budget conscious.
The only real advantage of going with a single or fewer PoE switches is you have less things to put on a UPS, if you require the system to still work when power goes down. A UPS that can run say 4 cameras, the PoE switch and a system running Frigate for more than a few hours can get pretty expensive too, in my experience - most cheap UPSes are designed to get you enough power to save some files and shutdown a PC in a matter of minutes, not hours.
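As a rough sanity check on sizing (illustrative numbers, not measurements): runtime is approximately usable battery watt-hours divided by load watts.

```python
def ups_runtime_hours(battery_wh, load_w, inverter_efficiency=0.85):
    """Rough UPS runtime estimate: usable stored energy divided by draw."""
    return battery_wh * inverter_efficiency / load_w

# Example: four 5 W cameras + 10 W PoE switch + 15 W mini PC = 45 W load.
# A 500 Wh consumer UPS then lasts roughly 9.4 hours - and most cheap
# UPSes ship with far less than 500 Wh of battery.
```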
Cheap intel box with a Coral runs Frigate fantastically, and if a tower build plenty of room for internal storage drives.
Yeah, consumer UPSs tend to scale their inverter capacity along with their battery capacity, which means if you're shopping for huge capacity for long runtime, you end up paying extra for a huge inverter you don't need.
I've gone the other route, with a simple power supply that charges an ever-evolving fleet of whatever cheap 12-volt batteries aren't doing anything else, which then feeds DC-DC converters for the various loads. For stuff that's natively 12-volt like my wifi router and cable modem, I just run those directly off the battery rail.
This setup is quiet, efficient, and presently runs the modem, router, service pi, and my RIPE Atlas probe, for somewhere upwards of 20 hours, for something like $150. If I added a 12v-to-48v converter and a small PoE switch feeding a few cameras, it would probably cut the runtime in half, but I could just throw more battery at it for pennies on the watt-hour.
Doesn’t have to be PoE cameras. I use wifi cameras too, pretty much any camera with rtsp/onvif would work.
Chances are, a single switch is more cost-effective than multiple injectors. But you also need Ethernet routed throughout your house. One alternative is a G.hn (powerline) adapter with PoE. This way, you can get both network and power from one plug without wiring your house.
Wi-Fi cameras are not a great idea. Sure, they are convenient, but Wi-Fi is a shared access medium (every device on, say, channel 11, has to “cooperate” with all the other devices about when it can transmit, including devices on neighboring SSIDs) and something that is constantly streaming video (or worse, multiple devices!) is going to quickly consume available bandwidth and offer a poor Wi-Fi experience. (But most people only care about convenience.) Plus, Wi-Fi is easily jammed, which is not great from a security perspective.
I have a variety of PoE power supplies based on where all my wires are running. I have one PoE switch near my main router that goes directly to a few cams. I have a second PoE switch in my living room that hooks into one in-house ethernet port and splits/powers two outdoor cams. Then I have a number of WiFi cams still where it wasn't convenient to get ethernet.
I am using it all: a PoE switch, then a couple of injectors where needed for some specific reason, and then also PoE splitters (one cable leaving the PoE switch, going to a splitter and then to 4 different PoE cameras, powering everything with one PoE output from the switch).
I would not use WiFi cameras. Standard RTSP PoE h264 is the way to go.
I have a single PoE switch for my ubiquiti cameras and polycom voip phones. My original need for the PoE switch was actually the access points and not the cameras but I slowly converted from nest to these.
Frigate is great and worthy of praise.
How would you know if you have zero false negatives unless you watch the whole video stream every day?
Whose lives have been saved?
> also fun
Do people actually work in security companies for the LOLz? Nobody told me.
My CPU object detection is OK, but the public, fast, easy to run models and my limited understanding of them is the weak point. I wound up doing a bunch of sanity check post filters and I'm sure it could be done much better with better models and better pre/post filtering.
I've recently gone through the process of trying to install pycoral on Rocky Linux 9. I had to build from source, and there was some challenge because documentation for the build process was sparse. There was some conflicting information about files I had to edit, values I had to set, what was supported and what wasn't.
Yep, the current models are based on ImageNet which is of course wildly different content to the typical security camera. It's no surprise that its recognition is often pretty poor, especially from the typical ceiling angles that cameras are mounted at
- Redundant disks/mirroring on the NVR
- Replication of Frigate's Event Database and Recordings to remote network storage
I primarily use Frigate as a general event index, with 'active-objects' as its recording criteria, and look at the NVR when there may be gaps in Frigate's coverage.
I've also been writing my own software to integrate with Frigate to help make better sense of activity and events at a macro level, compared to its current user interface.
go2rtc also works nicely for on demand transcoding of my H265-only cams to H264 to view the live stream in Firefox.
for security based purposes, why would you want to save all of that data that is not changing? you'll just end up fast-forwarding to the interesting bits anyways if you have to go to the footage.
Unfortunately consumer devices are almost always cloud-based, where storage but especially upstream bandwidth are much more costly considerations, so recording only on detection has become the norm in the consumer world.
External triggers are also an important feature in commercial systems that a lot of open source projects miss---but Frigate isn't guilty of this one, it can receive triggers by MQTT, which is the same thing I do right now with Blue Iris. That's the big thing that has me optimistic about Frigate going forward. Because motion and object detection are so inconsistent, triggering VMS events based on access control systems and intrusion sensors is often a much more reliable (and even easier to maintain) approach.
Space management for rolling retention is also a new and fairly basic feature in Frigate; I don't think it has a way to apply different retention policies per camera group or alarm state.
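What it does support, going by recent docs, is per-camera overrides of the global retention, which covers part of that. A hedged sketch with made-up camera names and day counts:

```yaml
record:
  enabled: true
  retain:
    days: 7           # global default for continuous footage
cameras:
  driveway:
    record:
      retain:
        days: 30      # keep this camera's footage longer
  garage:
    record:
      retain:
        days: 3       # and this one's for less time
```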
Can I somehow use these with Frigate? Is there a way to root these Ring cameras and use them?
I never liked the idea of paying a service fee, nor having Amazon pull the videos into their free neighborhood watch program.
Any suggestions on using them now?
Also, most of those Ring cameras are 2.4 GHz only, so you need a dedicated 2.4 GHz Wi-Fi network. They won't connect at all if you have an SSID that broadcasts both 5 GHz and 2.4 GHz under the same name.
Paired with Scrypted for HomeKit Secure Video (I've also found the RTSP streams it rebroadcasts to be more stable than having multiple sinks connected straight to the camera), this makes a really good persistent NVR solution that I can also monitor remotely without necessarily VPN'ing back into my home network or exposing Frigate through a separate reverse proxy.
https://github.com/blakeblackshear/frigate/pull/8382
Frigate also has pretty solid OpenVINO support now, which means accelerated inference on modern-ish Intel CPUs/iGPUs; that's a game changer when you have several cameras.
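For reference, enabling it is a small detector block in the config. The values below follow the example in Frigate's detector docs for the bundled SSDLite model; paths and model settings may differ between releases, so verify against your version:

```yaml
detectors:
  ov:
    type: openvino
    device: AUTO       # let OpenVINO pick the CPU or iGPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```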
Great docs, too.
- an Intel-based PC (can be a mini PC; doesn't need a powerful CPU)
- a USB Coral TPU ($60)
- some wired PoE cameras (from $60 each)
My question: what do people typically use to power the cameras? A single PoE switch, or multiple PoE injectors?
My Arlo Pro 2 cameras are apparently EOL and might stop receiving free cloud services in a couple of months. So this seems like a good time to upgrade to higher resolution cameras.
(The Frigate docs advise against using Wi-Fi cameras, which would otherwise be my preference.)
Yes, that's the thing I like about the Arlo system I have now: it has its own wifi network so, even if it's using spectrum, it's probably not affecting my LAN throughput.
> It's actually easier to run ethernet than power, especially outside.
This is true, but the house where I live already has power available everywhere I might need a camera. The thing I don't like about running new cables is the need to drill holes through exterior walls.
For PoE, I'd just do whatever is convenient. I've done setups with 2 PoE switches before so I could just run one cable between the front/back and then branch out from there.
Frigate links some Dahua camera recommendations in their documentation: https://docs.frigate.video/frigate/hardware/
I installed them and they've been rock solid. Low light performance is excellent. The turret form factor is nice and unobtrusive.
https://coral.ai/products/m2-accelerator-dual-edgetpu/
It basically doesn't matter at all: I have a mixture of both in my home, multiple PoE switches and multiple PoE injectors, for things like cameras, wireless APs, etc. Use whatever fits your needs/budget/location; you don't have to go nuts buying a single high-end PoE switch. There are often good deals to be had on used PoE switches on eBay and the like, too, if you're really budget conscious.
The only real advantage of going with a single (or fewer) PoE switches is that you have fewer things to put on a UPS if you need the system to keep working when the power goes down. In my experience, a UPS that can run, say, 4 cameras, the PoE switch, and a system running Frigate for more than a few hours can get pretty expensive too: most cheap UPSes are designed to give you enough power to save some files and shut down a PC in a matter of minutes, not hours.
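The sizing math is worth sketching, since it shows why runtime gets expensive fast. The wattages below are rough assumptions for illustration, not measurements:

```python
def runtime_hours(battery_wh: float, load_w: float, efficiency: float = 0.85) -> float:
    """Estimated UPS runtime: usable battery energy divided by the load."""
    return battery_wh * efficiency / load_w

# Assumed load: 4 PoE cameras at ~5 W each, a PoE switch at ~10 W,
# and a small Frigate box at ~20 W, so ~50 W total.
load = 4 * 5 + 10 + 20
print(runtime_hours(100, load))  # a typical consumer UPS (~100 Wh) lasts well under 2 hours
```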
Cheap intel box with a Coral runs Frigate fantastically, and if a tower build plenty of room for internal storage drives.
I've gone the other route, with a simple power supply that charges an ever-evolving fleet of whatever cheap 12-volt batteries aren't doing anything else, which then feeds DC-DC converters for the various loads. For stuff that's natively 12-volt like my wifi router and cable modem, I just run those directly off the battery rail.
This setup is quiet, efficient, and presently runs the modem, router, service pi, and my RIPE Atlas probe, for somewhere upwards of 20 hours, for something like $150. If I added a 12v-to-48v converter and a small PoE switch feeding a few cameras, it would probably cut the runtime in half, but I could just throw more battery at it for pennies on the watt-hour.
Wi-Fi cameras are more about convenience than reliability or dependability.
Chances are, a single switch is more cost-effective than multiple injectors, but that also requires Ethernet routed throughout your house. One alternative is a G.hn (powerline) adapter with PoE. That way you get both network and power from one plug without wiring your house.
I would not use Wi-Fi cameras. Standard RTSP PoE H.264 is the way to go.