In this project, I used an Intel® OpenVINO [Gaze Estimation model](https://docs.openvinotoolkit.org/latest/_models_intel_gaze_estimation_adas_0002_description_gaze_estimation_adas_0002.html) to control the mouse pointer of my computer. The Gaze Estimation model estimates the gaze direction of the user's eyes, and the mouse pointer position is changed accordingly. This project demonstrates the ability to run multiple models on the same machine and coordinate the flow of data between them.
## How It Works
The project is built using the InferenceEngine API from Intel's OpenVINO toolkit.
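
As a minimal sketch of how each model can be loaded and run with this API (assuming the OpenVINO 2020.x Python bindings; the wrapper class and file paths are illustrative, not the project's actual code):

```python
# Minimal sketch: load one Intermediate Representation (IR) model and run
# inference with the InferenceEngine API (assumes OpenVINO 2020.x bindings).
from openvino.inference_engine import IECore


class OpenVINOModel:
    """Thin, illustrative wrapper around a single OpenVINO IR model."""

    def __init__(self, model_xml, device="CPU"):
        self.ie = IECore()
        # read_network() takes the .xml topology plus its matching .bin weights
        self.net = self.ie.read_network(
            model=model_xml, weights=model_xml.replace(".xml", ".bin"))
        self.exec_net = self.ie.load_network(network=self.net, device_name=device)

    def infer(self, inputs):
        # `inputs` maps the model's input blob names to preprocessed numpy arrays
        return self.exec_net.infer(inputs=inputs)


# Hypothetical usage with a placeholder model path:
# gaze_net = OpenVINOModel("models/gaze-estimation-adas-0002.xml")
```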
The gaze estimation model used requires three inputs:
- The head pose
- The left eye image
- The right eye image
To get these inputs, use the three other OpenVINO models described below:
The application coordinates the flow of data from the input, then among the different models, and finally to the mouse controller. The flow of data looks like this:
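
As a rough sketch of that flow, assuming the usual companion models (face detection, head pose estimation, and facial landmarks detection) and hypothetical wrapper and mouse-controller names rather than the project's actual classes:

```python
# Hedged sketch of the per-frame data flow; the model wrappers and the mouse
# controller are hypothetical names standing in for the project's own classes.
import cv2


def process_frame(frame, face_model, head_pose_model, landmarks_model,
                  gaze_model, mouse):
    face = face_model.detect(frame)               # crop of the detected face
    if face is None:
        return                                    # no head detected: skip this frame
    angles = head_pose_model.estimate(face)       # yaw, pitch, roll (head pose)
    left_eye, right_eye = landmarks_model.crop_eyes(face)  # eye image patches
    gaze_x, gaze_y, _ = gaze_model.estimate(left_eye, right_eye, angles)
    mouse.move(gaze_x, gaze_y)                    # map the gaze vector to pointer motion


def run(video_path, face_model, head_pose_model, landmarks_model, gaze_model, mouse):
    cap = cv2.VideoCapture(video_path)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        process_frame(frame, face_model, head_pose_model, landmarks_model,
                      gaze_model, mouse)
    cap.release()
```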
We can use the [Deployment Manager](https://docs.openvinotoolkit.org/latest/_docs_install_guides_deployment_manager_tool.html) present in OpenVINO to create a runtime package from our application. These packages can be easily sent to other hardware devices to be deployed.
To deploy the application to other devices using the Deployment Manager, run the steps below.
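
For example, on a default Linux install the Deployment Manager script can be invoked roughly as follows (a sketch only: the install path, target, and output locations are assumptions taken from the OpenVINO documentation, not values verified for this project; the same call can also be run directly from a shell):

```python
# Hedged sketch: invoke the OpenVINO Deployment Manager to build a runtime
# package. Paths and flag values are placeholders; adjust them for your setup.
import subprocess

DEPLOYMENT_MANAGER = (
    "/opt/intel/openvino/deployment_tools/tools/"
    "deployment_manager/deployment_manager.py"
)

subprocess.run(
    [
        "python3", DEPLOYMENT_MANAGER,
        "--targets", "cpu",                        # hardware the package must support
        "--user_data", "/path/to/this/project",    # application code and models to bundle
        "--output_dir", "/path/to/output",         # where the runtime package is written
        "--archive_name", "computer_pointer_controller",
    ],
    check=True,
)
```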
## Edge Cases
- Multiple people in the frame: if multiple people are detected in the video frame, the application still uses and reports results for only one face.
- No head detected: the application skips the frame and informs the user (see the sketch below).
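
A hedged sketch of how these two cases can be handled around the face detection step (the `detect_all` method and the logging setup are illustrative names, not the project's actual API):

```python
# Illustrative handling of the two edge cases above; `face_model.detect_all`
# is a hypothetical method returning a list of detected face crops.
import logging

log = logging.getLogger(__name__)


def pick_face(frame, face_model):
    """Return a single face crop, or None when the frame should be skipped."""
    faces = face_model.detect_all(frame)
    if not faces:
        log.warning("No head detected in this frame; skipping it.")
        return None
    if len(faces) > 1:
        log.info("Multiple people detected; using only the first face.")
    return faces[0]
```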
## Areas of Improvement
- [Intel® VTune™ Profiler](https://software.intel.com/content/www/us/en/develop/tools/vtune-profiler/choose-download.html): Profile my application and locate any bottlenecks.
- Gaze estimation: we could revisit the logic for determining and calculating the pointer coordinates, as it is a bit flaky.
- Lighting conditions: we could use HSV-based pre-processing steps to minimize errors due to different lighting conditions.
## References
- [OpenCV Face Recognition](https://www.pyimagesearch.com/2018/09/24/opencv-face-recognition/)
- [Tracking your eyes with Python](https://medium.com/@stepanfilonov/tracking-your-eyes-with-python-3952e66194a6)
- [Real-time eye tracking using OpenCV and Dlib](https://towardsdatascience.com/real-time-eye-tracking-using-opencv-and-dlib-b504ca724ac6)
- [Deep Head Pose](https://github.com/natanielruiz/deep-head-pose/blob/master/code/utils.py#L86+L117)