We are all essentially evil because:
1) tomorrow will judge some behavior that is accepted today to be bad, and we are all doing it
2) creating an AI that can manipulate people is too tempting for the predator in us to resist (this is a REAL Turing test: making an AI that can make tools out of humans and everything else.)
The only way to stay ahead of the 'evil' corrupting influence of new tech is to prevent its widespread use from being controlled by a single entity. So, YOLO is OK as long as you cannot deploy it in a cloud at scale.
So, just as nuclear weapons (the massive concentration of energy release at tremendously fast rates) are bad, so is a super AI/AGI (the massive computational ability at nanosecond scale).
No evil was ever perpetrated by institutions of learning - only the business entities and governments that scaled up those discoveries caused evil.
And now for the flame bait:
So, by this argument we should elect Luddites to govern us, especially ones that are not imaginative or creative.
Focusing too much on the negative aspects of things will lead you nowhere.
The Manhattan Project produced more than 500 research papers. It gave us iodine-131 and other radionuclides that we use in medicine. And it gave us a lasting peace. So was it bad or not?
Is gene editing bad? Was the internet bad? Was the dude who invented the round wheel bad?
Yes, using atomic bombs was bad. As others say, it is debatable whether it helped end the (already won?) war. And in any case, a _lasting peace_ where? Are the Middle East and Africa not part of this world?
I find your comment entirely misses the point and is low effort. Asking whether random techniques, inventions, or inventors were "bad" or not makes as much sense as asking:
Is the sun bad? Were dinosaurs bad? Are the aliens bad? Is life bad?
One human brain can only perform some 10-12 hours of facial recognition work before needing to take a break, and does so at relatively low speeds.
One computer brain can be copied for free, and deployed to thousands of computer clusters and work 24/7 on facial recognition, at a fraction of the cost.
I'm looking for something to run on a Raspberry Pi, to detect humans on a security camera. The built-in camera software has false triggering, esp. on windy days.
When looking at these projects, how do I figure out what hardware they're aimed at? This one mentions NVidia/CUDA.
Is there any sort of hardware abstraction layer that YOLO or R-CNNs can operate on? Can I use any of this code (or models) for my R-Pi?
'round about 2004 I built a pentium based 'motion' recorder. It kept a circular buffer of images that were spooled to the output stream when motion was detected. Motion was determined by optical flow iirc - the OpenCV call returned an array of blob center points, size, and velocity vector. If the blob was large enough and the velocity vector made sense (eg, horizontal as in walking or driving at an appropriate magnitude) it was considered motion. Reduced leaf flutter, branch waving false positives to effectively zero. No ML required.
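A minimal frame-differencing sketch of the same idea (pure NumPy rather than the OpenCV optical-flow call described above; all thresholds here are made-up illustrative values, not the ones used in 2004):

```python
import numpy as np

def moving_blob(prev, curr, thresh=30):
    """Return (area, centroid) of the changed region between two frames, or None."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    area = int(changed.sum())
    if area == 0:
        return None
    ys, xs = np.nonzero(changed)
    return area, (float(xs.mean()), float(ys.mean()))

def looks_like_walking(frames, min_area=50, min_dx=2.0, max_dy=1.5):
    """Heuristic filter: accept only a sufficiently large blob whose centroid
    drifts mostly horizontally across the sequence (walking/driving);
    small leaf/branch flutter is rejected by the min_area check."""
    centroids = []
    for prev, curr in zip(frames, frames[1:]):
        blob = moving_blob(prev, curr)
        if blob is None or blob[0] < min_area:
            return False
        centroids.append(blob[1])
    dx = centroids[-1][0] - centroids[0][0]
    dy = centroids[-1][1] - centroids[0][1]
    return abs(dx) >= min_dx and abs(dy) <= max_dy
```

A 10x10 block sliding sideways trips the detector; a few flickering pixels do not. The real thing would feed the velocity vector from optical flow instead of differencing centroids, but the size-plus-direction gating is the same.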
ML is too liberally applied without understanding how or why it has triggered.
I've lost track of the original quote but the spirit of it is: "It's artificial intelligence while we don't understand it. Once we understand it, it's computer science"
I use the Movidius NCS and it's pretty much the optimal solution for light OpenCV work. It draws very little power, but as the Pi itself is powered by USB, don't expect to run much else off of USB on it.
I use it in the exact same scenario, a Raspberry Pi (Zero W) with a camera with motion detection and notifications on movement, my implementation may be specific though.
Each of my Raspberry Pi cameras runs motion (https://motion-project.github.io/index.html), and recorded files are stored on an NFS share. Each camera has its own directory within this share (or rather, each camera has its own share within a parent directory), and the server then runs a Python script that monitors for changed/added files and runs object detection on the newly created/changed files.
If a person is detected in the file, it then proceeds to create a "screenshot" of the frame with the most/largest bounding box, and sends a notification through Pushover.net including the screenshot with bounding box.
The implementation is not quite as simple as described here (for instance, I use a "notification service" listening on MQTT to send the Pushover notifications), but the gist of it is described above.
Edit: I should probably clarify that my cameras are based on Raspberry Pi Zero W. They have enough power to run motion at 720p - at around 30fps. Not great, but good enough for most applications. I've since migrated most to Unifi Protect instead. A little higher hardware cost, a lot better quality :)
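A polling sketch of the watch-then-detect loop described above (all names are hypothetical; the real script may use inotify rather than polling, and `detect_person` / `notify` stand in for the YOLO and Pushover calls):

```python
import time
from pathlib import Path

def new_files(root, seen, pattern="*.mp4"):
    """Return recordings under root not seen before; updates `seen` in place.
    Keyed on (path, mtime) so a rewritten file counts as new again."""
    fresh = []
    for p in sorted(Path(root).rglob(pattern)):
        key = (str(p), p.stat().st_mtime_ns)
        if key not in seen:
            seen.add(key)
            fresh.append(p)
    return fresh

def watch(root, detect_person, notify, interval=5.0, iterations=None):
    """Poll the share; run detection on each new file, notify on hits."""
    seen = set()
    n = 0
    while iterations is None or n < iterations:
        for path in new_files(root, seen):
            if detect_person(path):  # e.g. run the object detector on the file
                notify(f"person detected in {path.name}")
        time.sleep(interval)
        n += 1
```

Picking the best frame and attaching the bounding-box screenshot would happen inside `detect_person` / `notify`; this only shows the monitoring skeleton.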
I love how every new YOLO project inevitably leads to a discussion of ethics. At the very least, more people will wonder whether they should also be taking ethics into consideration wrt their own lines of work.
Pjreddie is a giant for this. It is a real contribution.
Trying to think of some applications for this. For example, one could create a mechanism that watches people entering and exiting a shop, providing the shop owner with more quantitative data he could use to optimize his sales.
Or you could have it watch a soccer game, generating all sorts of data on how the game went.
Entering/exiting buses for automatic passenger counters is more important than ever now. Being able to broadcast GTFS-Occupancy in real-time when only 50% (or less) of the bus can be filled with passengers, is a real issue transit is facing today.
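A toy sketch of the counting step, assuming some tracker already gives you each person's centroid position per frame (the virtual-doorline heuristic and all names here are my own, not taken from any of the projects mentioned):

```python
def count_crossings(track_xs, door_x):
    """Count signed crossings of a virtual door line by one centroid track:
    left-to-right crossings count as entries, right-to-left as exits."""
    entries = exits = 0
    for a, b in zip(track_xs, track_xs[1:]):
        if a < door_x <= b:
            entries += 1
        elif b < door_x <= a:
            exits += 1
    return entries, exits
```

Summing (entries - exits) over all tracks gives a running occupancy estimate, which is the number a GTFS occupancy feed would broadcast.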
"Bflops"? I'm guessing this is a measure of the total processing power needed, in billions of floating-point ops, and not a measure of operations per second?
Furthermore, autofocus has already progressed from face detection to eye detection.
Is the point that pjreddie used horses, dogs, and bicycles as training data, not realising that his technology could also be used on human faces?
I think the jury is still out on that. Or rather the trial is still underway.
https://github.com/blakeblackshear/frigate
I remember a professor saying, "The definition of AI is: something that doesn't work"
[1] https://www.seeedstudio.com/Sipeed-Maix-Cube-p-4553.html
Where can the "Easy Set", "Medium Set", and "Hard Set" evaluations referenced in "Wider Face Val" be found?
All on a relatively cheap piece of hardware.
https://andrewnc.github.io/projects/projects.html#heart-rate
Plain-text XML for the frontal face detector is 912 KB. 132 KB gzipped. It should be smaller in binary.