Ticket Name: TDA2EVM5777: Pedestrian Detection with Network File source
Query Text:
Part Number: TDA2EVM5777 Other Parts Discussed in Thread: TDA2
Hi, I want to run the "vip_single_cam_object_detection2" pedestrian detection usecase on the Vayu EVM with a network file source instead of camera capture. In this usecase, I replaced the "Capture" link with the "NullSource" link reading a file over the network, following the "chains_networkRxDisplay" usecase. While running, I feed my raw NV12-format YUV video through the "network_tx.exe" network tool that comes with Vision SDK.
The problem is: if I give only a single frame of this raw video as input, it detects pedestrians in that frame properly. But when I give the raw video (multiple frames) as input, it does not detect any pedestrian in any frame. I am able to play the video on the 10" LCD. I tried changing the frame rate in my .c file to 30, 1, 0.5, and 0.2 FPS by setting the NullSource parameter "timerPeriodMilliSecs" to the corresponding time in milliseconds.
Following is my setup:
- Vayu EVM (DRA74x) Rev G3 with 10" LCD, without Vision Application Board
- Vision SDK 2.11.00.00 with the tda2xx_evm_bios_all configuration
- NDK_PROC_TO_USE=ipu1_0
I suspect there is some timing-related issue between the links. Can you suggest what the problem could be? Regards, Abhishek Gupta
Responses:
Hi, Abhishek, Your query has been forwarded to an expert. Regards, Mariya
Do you see output video with no pedestrians marked? Or do you not see any video on the display at all? Regards, Kedar
Hi Kedar, I am able to see the output video on the display, but with no pedestrians marked. Regards, Abhishek
Can you send the use-case file that you modified? Also, what test input are you using? Is it possible to send a few frames of the test input so that we can check at our end? The pedestrian detection algorithm has a notion of history, so it reports a pedestrian only if it is detected a few times in a sequence of frames. Also, the pedestrian detection algorithm is just a demo and may not work so well on arbitrary input streams. Regards, Kedar
PD_e2e_post_files.zip I have attached the use-case file and 15 frames of input video that I am using. I have created the video from 15 consecutive images from the Caltech dataset which is widely used for training and testing pedestrian detection algorithms. Regards, Abhishek
Hi Abhishek, we are able to recreate the issue at our end. We will get back to you with a solution. Regards, Kedar
I looked at the video that you shared - it is mostly a collection of random frames and not a continuous video. Since the object tracking looks for continuity over several frames, it won't be able to output a stable object location. If you don't have a proper video to give as input, you can create one from the following sequence: https://data.vision.ee.ethz.ch/cvl/aess/cvpr2008/seq03-img-left.tar.gz There are several such sequences on the following page: data.vision.ee.ethz.ch/.../ Let us know how it goes. Best regards, Manu.
Hi Manu, thanks for your response. It was able to detect pedestrians in the sequences from the data.vision link you shared. There are some places where it misses a person or produces false positives, but since this is a demo algorithm, I am OK with that. Can you tell me which technical paper the algorithm is based on, and what dataset it is trained on? Another thing I noticed is that if I run at more than 5 FPS, the application gets stuck after a few frames. There may be an issue with my network connection, which may not support such high data transfer rates (640x480 at 5 fps takes ~2.3 MB/s). At less than 5 FPS it plays fine. Regards, Abhishek
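For reference, the ~2.3 MB/s figure above follows directly from the frame size, assuming NV12 at 1.5 bytes per pixel; the short, self-contained C check below is only illustrative and not part of the SDK.

#include <stdio.h>

/* Sanity check of the quoted network bandwidth: an uncompressed NV12 frame
 * carries 1.5 bytes per pixel, so 640x480 at 5 fps needs about 2.3 MB/s. */
int main(void)
{
    double bytesPerFrame = 640.0 * 480.0 * 1.5;   /* 460,800 bytes per frame */
    double bytesPerSec   = bytesPerFrame * 5.0;   /* 2,304,000 bytes per second */
    printf("640x480 NV12 @ 5 fps ~= %.2f MB/s\n", bytesPerSec / 1e6);
    return 0;
}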
Hi Abhishek, the ACF detector is a popular pedestrian detection algorithm. You can check it out at the following link; you can also find literature references there. pdollar.github.io/.../ I am not competent enough to comment on the frame freeze - Kedar can probably help there. By the way, do you mind describing details of the application that you are targeting? Best regards, Manu.
Abhishek, currently the networkRxDisplay use case is configured to run at a very low frame rate so that it also works on a core like the M4. To enable a higher frame rate, change the setting below in the use-case application file to the frame rate you desire. As you are running this use-case on the TDA2 A15, you should be able to set it up to 30 fps. Change this: pPrm->timerPeriodMilliSecs = 1000; to: pPrm->timerPeriodMilliSecs = 1000/30;
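As a side note on the arithmetic: timerPeriodMilliSecs is the period of the frame timer, so the value for a desired frame rate is 1000/fps milliseconds (with C integer division, 1000/30 evaluates to 33 ms, i.e. roughly 30 fps). A tiny helper illustrating this relationship; the helper name is purely illustrative and not an SDK API:

/* Convert a desired frame rate into a NullSource timer period in ms.
 * Plain C arithmetic; not part of the Vision SDK. */
static inline unsigned int fpsToTimerPeriodMsec(unsigned int fps)
{
    return 1000u / fps;   /* e.g. fps = 30 -> 33 ms, fps = 1 -> 1000 ms */
}

With such a helper, pPrm->timerPeriodMilliSecs = fpsToTimerPeriodMsec(30); would be equivalent to the change above.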
Hi Abhishek, we are also trying to send video through the network port. Could you please tell me how you have linked the "chains_networkRxDisplay" usecase with the "vip_single_cam_object_detection2" usecase? Thanks, Swati
Hi Swati, make a copy of the <VISION_SDK>/vision_sdk/examples/tda2xx/src/usecases/vip_single_cam_object_detection2 folder. Then follow these steps:
1. Replace "Capture" with "NullSource" in chains_vipSingleCameraObjectDetect2Tda3xx.txt.
2. In chains_vipSingleCameraObjectDetect2Tda3xx.c, replace all the CapturePrms with the NullSrcPrms from the chains_networkRxDisplay usecase. This involves adding the function chains_myObjDetect_SetNullSrcPrms instead of ChainsCommon_SingleCam_SetCapturePrms (see the sketch after these steps).
3. If you are using NDK_PROC_TO_USE=a15_0 (and not ipu1_0) in <VISION_SDK>/vision_sdk/configs/tda2xx_evm_bios_all/cfg.mk (depending on what board and config you are using), then you need to add that dependency in your usecase folder's cfg.mk.
Follow the build procedure as described in the Vision SDK developer's guide. Hope it helps.
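A minimal sketch of the setup function from step 2. Only the names chains_myObjDetect_SetNullSrcPrms and timerPeriodMilliSecs come from this thread; NullSrcLink_CreateParams and everything else here are assumptions, so copy the real field names from the NullSrc setup in the chains_networkRxDisplay sources of your SDK version:

/* Sketch only: mirror the NullSrc setup of the chains_networkRxDisplay
 * usecase. Fields other than timerPeriodMilliSecs are assumptions and must
 * be checked against the chains_networkRxDisplay code in your SDK. */
static Void chains_myObjDetect_SetNullSrcPrms(NullSrcLink_CreateParams *pPrm)
{
    /* Start from the parameter values used by chains_networkRxDisplay, then
     * override the timer period for the frame rate you want (see the earlier
     * post about timerPeriodMilliSecs). */
    pPrm->timerPeriodMilliSecs = 1000/30;   /* ~33 ms period -> ~30 fps */

    /* Configure the output channel to match the YUV file you stream with
     * network_tx.exe (640x480 NV12 in this thread). The exact channel/queue
     * fields differ between SDK versions, so take them from
     * chains_networkRxDisplay rather than from this sketch. */
}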
Hi Abhishek, thank you for the help. I have one query: have you tried running the FCW usecase, which is under "C:\VISION_SDK_02_12_00_00\ti_components\algorithms_codecs\REL.200.V.SFM.C66X.00.01.00.00\200.V.SFM.C66X.00.01\modules\ti_forward_collision_warning\test\src", through the network port? If yes, can you suggest how to provide video streams as input to that sample usecase?
Hi Swati, I have not tried to build or run the FCW algo, but I think providing video streams over the network would be the same as in the chains_networkRxDisplay usecase. You will have to use the network_tx.exe tool on Windows, provided with the Vision SDK, to transfer the file. The command and details are given in the Vision SDK developer's guide. Regards, Abhishek