OMG I thought I was going crazy thinking it should work. I posted on Stack Overflow and received this reasoning. I had already implemented the change in code. Thank you so much for this update.
Assuming you are using only the "standard" C++ OpenCV, you can try OCL: http://opencv.org/platforms/opencl.html
or, if possible, implement some parallelization using std::thread: http://www.cplusplus.com/reference/thread/thread/
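For what it's worth, the same OpenCL support is also reachable from Python through OpenCV's transparent API (cv2.UMat). A minimal sketch, assuming an OpenCV build with OpenCL enabled and a hypothetical frame.png input:

    import cv2

    # Enable the transparent OpenCL API (a no-op if no OpenCL device exists)
    print(cv2.ocl.haveOpenCL())
    cv2.ocl.setUseOpenCL(True)

    img = cv2.imread('frame.png')               # hypothetical input image
    u = cv2.UMat(img)                           # upload to the OpenCL device
    blurred = cv2.GaussianBlur(u, (7, 7), 1.5)  # runs via OpenCL when possible
    result = blurred.get()                      # download back to a numpy array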
I mean, without knowing more, I can't say. But the article I posted above certainly suggests even YOLO tiny struggles on an RPi.
On a video file:
>Our combination of Raspberry Pi, Movidius NCS, and Tiny-YOLO can apply object detection at the rate of ~2.66 FPS.
On a live video stream:
>Notice that processing a camera stream leads to a higher FPS (~4.28 FPS versus 2.66 FPS respectively).
That's WITH a $150 Movidius NCS2 ( https://www.amazon.com/Intel-Neural-Compute-Stick-2/dp/B07KT6361R )
Without an NCS2 I expect it would be 10-20x slower.
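If you do go the NCS route, OpenCV's DNN module can offload inference to the stick when OpenCV is built against Intel's Inference Engine (OpenVINO). A rough sketch, assuming such a build and hypothetical Tiny-YOLO config/weights files:

    import cv2

    # Hypothetical Tiny-YOLO files; any Darknet-format model works the same way
    net = cv2.dnn.readNetFromDarknet('yolov3-tiny.cfg', 'yolov3-tiny.weights')
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # run on the NCS/NCS2

    frame = cv2.imread('frame.jpg')  # stand-in for a camera frame
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())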
These changes for version 3 are close to the same as this v2.4 cheat sheet. Hopefully these docs are somewhat useful.
> You can also sorta cheat and double your data by flipping the images
You can also just use the PyTorch transformer pipelines, e.g. https://pytorch.org/docs/stable/torchvision/transforms.html
    from torchvision import transforms

    transformer = transforms.Compose([
        transforms.RandomAffine(degrees=15),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.Resize(340),
        transforms.CenterCrop(320),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
    ])
You can point that and an ImageFolder dataset at a directory, and it will effectively create thousands of permutations of each input image, randomly skewing/rescaling/flipping each one and normalizing each color channel to the same mean & stdev.
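Roughly, wiring that up looks like this; a minimal sketch, assuming the transformer pipeline above and a hypothetical data/train/&lt;class_name&gt;/*.jpg layout:

    from torchvision import datasets
    from torch.utils.data import DataLoader

    # transformer is the transforms.Compose pipeline defined above
    # Hypothetical layout: data/train/<class_name>/*.jpg
    train_data = datasets.ImageFolder('data/train', transform=transformer)
    loader = DataLoader(train_data, batch_size=32, shuffle=True, num_workers=2)

    # The random transforms are re-applied every epoch, so each pass sees
    # differently skewed/flipped/cropped versions of the same images.
    for images, labels in loader:
        pass  # feed batches to your model here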
There are tons of pretrained models that don't require much annotated training data.
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
This example uses Resnet18 - you basically just need to have your images in 4 folders: "Train - At Desk", "Train - No Desk", "Valid - At Desk", "Valid - No Desk"
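The gist of that tutorial is freezing the pretrained backbone and retraining only the final layer. A minimal sketch for the hypothetical two-class "At Desk" / "No Desk" setup:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False  # freeze the pretrained backbone

    model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable 2-class head
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
    criterion = nn.CrossEntropyLoss()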
Unless you have a lot of different desks and people in your training data, I suspect you will be highly prone to overfitting, though.
    #!/usr/bin/env python
    # ---- encoding: utf-8 ----
    import numpy as np
    import cv2

    height = 400
    width = 43388
    black = (0, 0, 0)
    gray = (160, 160, 160)

    im = np.zeros((height, width, 3), np.uint8)
    im[:] = gray
    # this 6000-pixel-wide line WILL be drawn
    cv2.line(im, (0, (height // 3) * 2), (6000, (height // 3) * 2), black, 3)
    # this 43388-pixel-wide line will NOT be drawn
    cv2.line(im, (0, height // 3), (width, height // 3), black, 3)
    cv2.imwrite('Line_on_huge_image.bmp', im)
One graph takes 52.065664 MB in RAM; sorry, it's the whole stack that takes 170 MB.
Alright, I looked at https://plot.ly/~jackp/17421/plotly-candlestick-chart-in-python/ and it seems it can do what I want.
However, I just need to be able to draw this line and everything I want would be OK. Plus, I'm not sure Plotly will be able to generate a readable graph with 9000 candles.
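For reference, a horizontal line can be added to a Plotly candlestick chart as a paper-referenced shape, so it spans all the candles regardless of how many there are. A minimal sketch with made-up OHLC data:

    import plotly.graph_objects as go

    # Tiny made-up OHLC sample; a real chart would have ~9000 candles
    dates = ['2020-01-01', '2020-01-02', '2020-01-03']
    o, h, l, c = [10, 11, 12], [12, 13, 14], [9, 10, 11], [11, 12, 13]
    level = 11.5  # the horizontal line to draw across the whole chart

    fig = go.Figure(data=[go.Candlestick(x=dates, open=o, high=h,
                                         low=l, close=c)])
    # xref='paper' makes the line span the full x-axis width
    fig.add_shape(type='line', xref='paper', x0=0, x1=1, y0=level, y1=level,
                  line=dict(color='black', width=1))
    fig.write_html('candles.html')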
I haven't used one, but there is the NVIDIA Jetson Nano. There are higher end Jetsons, but they are over $300. You'll find tutorials to get going with OpenCV and other frameworks by just Googling for them.
It looks like this is what I forgot to order:
vs
This IP Cam: https://www.cctvcameraworld.com/1080p-2-megapixel-ip-ptz-camera-outdoor-ip67.html
Both have a 60 fps framerate and are full HD.