Recommended category for independent preparation: Diploma thesis
Code: 559821 | Date created: 2020 | Pages: 64
Contents

Table of Contents
ABSTRACT
Keywords
Terms and Abbreviations
1. INTRODUCTION
1.1. Background
1.2. Optical Flow Estimation
1.3. Vehicle Speed Estimation
1.4. Deep Learning
1.5. Problem Statement
1.6. Relevance and Motivation
1.7. Research Question
1.8. Goal
1.9. Objective
1.10. Conclusion
2. LITERATURE REVIEW
2.1. Classical Methods for Optical Flow Determination
2.1.1. Lucas-Kanade
2.1.2. Horn and Schunck
2.2. State-of-the-Art Methods for Optical Flow Determination
2.2.1. FlowNet
2.2.2. FlowNet 2.0
2.2.3. SPyNet
2.2.4. Speed Estimation Using Optical Flow for Vehicle Tracking
2.2.5. Using a UAV Platform to Estimate the Speed of Multiple Moving Objects
2.2.6. Using Smartphone Sensors for Estimating Vehicle Speed on Highway Roads
2.3. Summary of Methods and Algorithms
2.4. Conclusion
3. METHODOLOGY
3.1. Requirements Analysis
3.1.1. Functional Requirements
3.1.2. System Architecture
3.1.3. Workflow
3.1.4. Flowchart
3.2. Dataset
3.3. Solution Approach in Steps
3.4. Conclusion
4. IMPLEMENTATION
4.1. Tools and Technologies
4.2. Experiments and Results
Test Result 1
Test Result 2: After Some Adjustments
Test Result 3: Further Adjustments
4.3. Impact of the Parameters of the Proposed Approach
4.4. Software Program (The Application)
4.4.1. How to Set Up and Test (Use) the Application
4.5. Conclusion
5. EVALUATION, DISCUSSION AND CONCLUSION
5.1. Evaluation
5.1.1. Comparison
5.2. Discussion
5.2.1. Interpreting and Explaining Results
5.2.2. Answering the Research Question
5.2.3. Justifying the Approach
5.2.4. Critically Evaluating the Study
5.3. Conclusion
5.4. Future Work
6. BIBLIOGRAPHY (REFERENCES)
7. APPENDICES
Appendix 1. CNN Model
Appendix 2. Optical Flow vs Receptive Field Maps of LPTCs [42]
Appendix 3. Image Processing: CNN vs Brain [43]
Appendix 4. Script for Creating ROI
Appendix 5. Project Folder Structure
Introduction
1. INTRODUCTION
Images are useful in themselves; videos are even more resourceful, because a video carries more information than a single image. An image exposes only the spatial positioning of its pixels, that is, the relative position of each pixel. A video, by contrast, consists of a sequence of individual frames, each a still image, which together compose the complete moving picture over time. A video therefore contains the same spatial information plus an additional temporal component: not only where a pixel value occurs, but also when it assumes a particular location and for how long. In other words, information in a video is encoded not only spatially but also sequentially with respect to time. This additional information opens many doors for investigation and makes the processing of videos all the more interesting.
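As a minimal sketch of this spatio-temporal structure (assuming OpenCV is available; the file name is purely illustrative), a video can be read as a sequence of frames indexed by time:

```python
import cv2

# Open a video file; the file name here is purely illustrative.
cap = cv2.VideoCapture("dashcam.mp4")

frames = []
while True:
    ok, frame = cap.read()   # frame: an H x W x 3 array (spatial information)
    if not ok:
        break
    frames.append(frame)     # list index = time (temporal information)
cap.release()

# frames[t][y, x] is the pixel value at location (x, y) at time step t,
# so the video is effectively a 4-D array indexed by (t, y, x, channel).
print(len(frames), "frames read")
```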
Optical flow is one of these doors. It is a technique used to track the motion of objects in videos, with a number of different applications including video compression, video stabilization, video description (a more recent application area), object detection and tracking, and velocity estimation, to mention but a few. Optical flow estimation is a per-pixel prediction: it estimates the displacement of pixel brightness patterns as they travel across the video frames, that is, how the brightness moves across the screen over time [9], [10], [11].
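The fragment gives no code at this point, but as a rough illustration of what a per-pixel prediction means in practice, OpenCV's classical Farnebäck method computes a dense flow field, one (u, v) displacement vector per pixel, between two consecutive frames (the frame file names below are illustrative assumptions):

```python
import cv2
import numpy as np

# Two consecutive frames, converted to grayscale; the file names are
# placeholders for any pair of consecutive video frames.
frame1 = cv2.imread("frame_0001.png")
frame2 = cv2.imread("frame_0002.png")
prev_gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Dense optical flow: flow[y, x] = (u, v) displacement of pixel (x, y).
flow = cv2.calcOpticalFlowFarneback(
    prev_gray, next_gray, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Magnitude and direction of the apparent motion at each pixel.
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean per-pixel displacement:", float(np.mean(magnitude)))
```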
With the recent advancements and benefits of AI, deep neural networks are becoming a popular approach to solving such problems. They enable computers to learn features from images and videos in order to predict certain behaviour. Imagine athletes running with a camera on their chest: could the speed of an athlete be determined in real time? Given a video from the dashboard camera of a moving car, can the speed of the car be determined? And if it is possible for a car, why should it not work for athletics as well? The relevance of this idea is not limited to the areas mentioned above; it could also be used in understanding people flow. In the paper by Hara et al., a method to estimate the flow of pedestrians from dashboard camera video was proposed and accomplished with the help of a Convolutional Neural Network (CNN) [12]. Determining the speed of a car can be very challenging, but with labelled data, a CNN architecture, and the help of optical flow, it might be feasible to estimate the speed. Let us explore optical flow a little further and proceed with further insight into vehicle speed estimation.
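To make the idea concrete, here is a minimal sketch of such a network, not the thesis's actual model: the input resolution, layer sizes, and training setup are illustrative assumptions. It regresses a scalar speed from a dense optical-flow field like the one computed above:

```python
import tensorflow as tf

# Input: a 2-channel dense flow field (u, v) per pixel, e.g. 240 x 320.
# The resolution and layer sizes are illustrative, not the thesis's.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(240, 320, 2)),
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # regularization, cf. [8]
    tf.keras.layers.Dense(1),       # scalar speed (regression output)
])

# Mean squared error is the usual loss for this kind of regression.
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Trained on pairs of flow fields and ground-truth speeds, for example from a labelled dashcam dataset such as the comma.ai speed challenge [21], such a network learns the mapping from apparent per-pixel motion to vehicle speed.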
Fragment of the work for review
Good day! Dear students, we present for your attention a diploma thesis on the topic: 'The Program for Optical Flow Estimation Based on Deep Learning Approaches'.
Originality of the work: 90%
Bibliography
6. BIBLIOGRAPHY (REFERENCES)
[1] R. Klette, Concise Computer Vision: An Introduction into Theory and Algorithms. London: Springer-Verlag, 2014.
[2] D. H. Ballard and C. M. Brown, 'Computer Vision'. Englewood Cliffs, N.J.: Prentice-Hall, 1982, Accessed: May 14, 2020. [Online]. Available: https://trove.nla.gov.au/version/44901301.
[3] T. S. Huang, 'Computer Vision: Evolution and Promise', p. 5.
[4] M. Sonka, V. Hlavac, and R. Boyle, 'Image understanding', in Image Processing, Analysis and Machine Vision, M. Sonka, V. Hlavac, and R. Boyle, Eds. Boston, MA: Springer US, 1993, pp. 316–372.
[5] C. S. Royden and K. D. Moore, 'Use of speed cues in the detection of moving objects by moving observers', Vision Res., vol. 59, pp. 17–24, Apr. 2012, doi: 10.1016/j.visres.2012.02.006.
[6] J. Brownlee, 'Understand the Impact of Learning Rate on Neural Network Performance', Machine Learning Mastery, Jan. 24, 2019. https://machinelearningmastery.com/understand-the-dynamics-of-learning-rate-on-deep-learning-neural-networks/ (accessed May 19, 2020).
[7] F. D, 'Batch normalization in Neural Networks', Medium, Oct. 25, 2017. https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c (accessed May 19, 2020).
[8] J. Brownlee, 'Dropout Regularization in Deep Learning Models With Keras', Machine Learning Mastery, Jun. 19, 2016. https://machinelearningmastery.com/dropout-regularization-deep-learning-models-keras/ (accessed May 19, 2020).
[9] A. Burton and J. Radford, Thinking in Perspective: Critical Essays in the Study of Thought Processes. Methuen, 1978.
[10] B. K. P. Horn and B. G. Schunck, 'Determining Optical Flow', p. 19.
[11] 'What is Optical Flow and why does it matter in deep learning'. https://medium.com/swlh/what-is-optical-flow-and-why-does-it-matter-in-deep-learning-b3278bb205b5 (accessed May 15, 2020).
[12] Y. Hara, A. Uchiyama, T. Umedu, and T. Higashino, 'Sidewalk-level People Flow Estimation Using Dashboard Cameras Based on Deep Learning', in 2018 Eleventh International Conference on Mobile Computing and Ubiquitous Network (ICMU), Oct. 2018, pp. 1–6, doi: 10.23919/ICMU.2018.8653595.
[13] G. Johansson, 'Visual perception of biological motion and a model for its analysis', Percept. Psychophys., vol. 14, no. 2, pp. 201–211, Jun. 1973, doi: 10.3758/BF03212378.
[14] The Essential Guide to Video Processing. Elsevier, 2009.
[15] H. Wen, J. Shi, Y. Zhang, K.-H. Lu, J. Cao, and Z. Liu, 'Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision', Cereb. Cortex, vol. 28, no. 12, pp. 4136–4160, Dec. 2018, doi: 10.1093/cercor/bhx268.
[16] 'Introduction to Motion Estimation with Optical Flow', AI & Machine Learning Blog, Apr. 24, 2019. https://nanonets.com/blog/optical-flow/ (accessed May 15, 2020).
[17] S. S. Beauchemin and J. L. Barron, 'The computation of optical flow', ACM Comput. Surv. (CSUR), vol. 27, no. 3, pp. 433–466, Sep. 1995, doi: 10.1145/212094.212141.
[18] 'Optical Flow — OpenCV-Python Tutorials 1 documentation'. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html (accessed May 19, 2020).
[19] T. R. S. Kalyan and M. Malathi, 'Architectural implementation of high speed optical flow computation based on Lucas-Kanade algorithm', in 2011 3rd International Conference on Electronics Computer Technology, Apr. 2011, vol. 4, pp. 192–195, doi: 10.1109/ICECTECH.2011.5941885.
[20] Y. Tang, L. Ma, and L. Zhou, 'Hallucinating Optical Flow Features for Video Classification', arXiv:1905.11799 [cs], Jun. 2019, Accessed: May 14, 2020. [Online]. Available: http://arxiv.org/abs/1905.11799.
[21] commaai/speedchallenge. comma.ai, 2020.
[22] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. USA: Prentice Hall Press, 2009.
[23] A. M. Turing, 'Computing Machinery and Intelligence', in Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, R. Epstein, G. Roberts, and G. Beber, Eds. Dordrecht: Springer Netherlands, 2009, pp. 23–65.
[24] G. E. Moore, 'Cramming More Components Onto Integrated Circuits', Proc. IEEE, vol. 86, no. 1, pp. 82–85, Jan. 1998, doi: 10.1109/JPROC.1998.658762.
[25] J. Dean, '1.1 The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design', in 2020 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, Feb. 2020, pp. 8–14, doi: 10.1109/ISSCC19947.2020.9063049.
[26] J. Brownlee, 'What is Deep Learning?', Machine Learning Mastery, Aug. 15, 2019. https://machinelearningmastery.com/what-is-deep-learning/ (accessed May 18, 2020).
[27] 'The Difference Between AI, Machine Learning, and Deep Learning? | NVIDIA Blog', The Official NVIDIA Blog, Jul. 29, 2016. https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/ (accessed May 18, 2020).
[28] S. Saha, 'A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way', Medium, Dec. 17, 2018. https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53 (accessed May 18, 2020).
[29] B. D. Lucas and T. Kanade, 'An iterative image registration technique with an application to stereo vision', in Proceedings of the 7th International Joint Conference on Artificial Intelligence - Volume 2, Vancouver, BC, Canada, Aug. 1981, pp. 674–679, Accessed: May 14, 2020. [Online].
[30] H. Wang and B. Raj, 'On the Origin of Deep Learning', arXiv:1702.07800 [cs, stat], Mar. 2017, Accessed: May 18, 2020. [Online]. Available: http://arxiv.org/abs/1702.07800.
[31] P. Fischer et al., 'FlowNet: Learning Optical Flow with Convolutional Networks', arXiv:1504.06852 [cs], May 2015, Accessed: May 14, 2020. [Online]. Available: http://arxiv.org/abs/1504.06852.
[32] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, 'FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks', arXiv:1612.01925 [cs], Dec. 2016, Accessed: May 14, 2020. [Online]. Available: http://arxiv.org/abs/1612.01925.
[33] A. Ranjan and M. J. Black, 'Optical Flow Estimation using a Spatial Pyramid Network', arXiv:1611.00850 [cs], Nov. 2016, Accessed: May 14, 2020. [Online]. Available: http://arxiv.org/abs/1611.00850.
[34] S. P. Indu, M. Gupta, and A. Bhattacharyya, 'Vehicle Tracking and Speed Estimation using Optical Flow Method', 2011. https://www.semanticscholar.org/paper/Vehicle-Tracking-and-Speed-Estimation-using-Optical-Indu-Gupta/23b20e1df30d2b8cd086cd735431f7ebe05e7c5e (accessed May 14, 2020).
[35] D. Biswas, H. Su, C. Wang, and A. Stevanovic, 'Speed Estimation of Multiple Moving Objects from a Moving UAV Platform', ISPRS Int. J. Geo-Inf., vol. 8, no. 6, p. 259, Jun. 2019, doi: 10.3390/ijgi8060259.
[36] N. E. A. Abdelgawad, A. El Mahdy, W. Gomaa, and A. Shoukry, 'Estimating Vehicle Speed on Highway Roads from Smartphone Sensors Using Deep Learning Models', in 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), Nov. 2019, pp. 979–986, doi: 10.1109/ICTAI.2019.00138.
[37] A. Pant, 'Workflow of a Machine Learning Project', Medium, Jan. 23, 2019. https://towardsdatascience.com/workflow-of-a-machine-learning-project-ec1dba419b94 (accessed May 19, 2020).
[38] D. Scharstein, 'Some utilities for reading, writing, and color-coding .flo images', Feb. 07, 2007. http://vision.middlebury.edu/flow/code/flow-code/README.txt (accessed May 15, 2020).
[39] M. Bojarski et al., 'End to End Learning for Self-Driving Cars', arXiv:1604.07316 [cs], Apr. 2016, Accessed: May 14, 2020. [Online]. Available: http://arxiv.org/abs/1604.07316.
[40] J. Mitchell, 'Autonomous Vehicle Speed Estimation from dashboard cam', Medium, Jul. 07, 2017. https://chatbotslife.com/autonomous-vehicle-speed-estimation-from-dashboard-cam-ca96c24120e4 (accessed May 15, 2020).
[41] J. Sardinha, 'Predicting vehicle speed from dash cam video', Medium, Aug. 30, 2017. https://medium.com/weightsandbiases/predicting-vehicle-speed-from-dashcam-video-f6158054f6fd (accessed May 19, 2020).
[42] S. J. Huston and H. G. Krapp, 'Visuomotor Transformation in the Fly Gaze Stabilization System', PLOS Biol., vol. 6, no. 7, p. e173, Jul. 2008, doi: 10.1371/journal.pbio.0060173.
[43] 'Deep Convolutional Neural Networks as Models of the Visual System: Q&A', Neurdiness, May 17, 2018. https://neurdiness.wordpress.com/2018/05/17/deep-convolutional-neural-networks-as-models-of-the-visual-system-qa/ (accessed May 18, 2020).