Hand Gesture Controlled Dino Game in Python Using OpenCV

Hello everyone! In this article we will create a hand-gesture-controlled Dino game in Python using OpenCV. We will use the MediaPipe and OpenCV libraries for hand tracking, and the PyAutoGUI library to control the character. Before we start, let's go over the basic definitions related to the project.

Please note that this project is aimed at high school students, who should have some basic knowledge of Python programming.


OpenCV is a huge open-source library for computer vision, machine learning, and image processing. OpenCV supports a wide variety of programming languages like Python, C++, Java, etc. It can process images and videos to identify objects, faces, or even the handwriting of a human.

The MediaPipe library offers open-source, cross-platform, customizable ML solutions for live and streaming media.

The PyAutoGUI library lets your Python scripts control the mouse and keyboard to automate interactions with other applications. The API is designed to be simple, and PyAutoGUI works on Windows, macOS, and Linux.

Gesture Controlling a Chrome Dino Game Using OpenCV

Requirements for this project: Make sure you install the following programs on your computer.

  • Install Python – download the latest version for your computer: www.python.org/downloads
  • Install PyCharm – download the latest version for your computer: www.jetbrains.com/pycharm/download
  • Install the MediaPipe, OpenCV, and PyAutoGUI packages with pip.
  • Run these pip commands in your terminal:
pip install PyAutoGUI
pip install mediapipe
pip install opencv-python


Open PyCharm, create a new Python file, and enter the following code.

import cv2
import mediapipe as mp
import pyautogui

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

# Landmark indices of the five fingertips
tipIds = [4, 8, 12, 16, 20]

status = "Normal"

def findHands(image, results, draw=True):
    # Draw the detected hand landmarks on the frame
    if results.multi_hand_landmarks:
        for handLms in results.multi_hand_landmarks:
            if draw:
                mp_drawing.draw_landmarks(image, handLms,
                                          mp_hands.HAND_CONNECTIONS)
    return image

# Find finger positions in the image and return a landmark list
def fingerPosition(image, results, handNo=0, draw=True):
    lmList = []
    if results.multi_hand_landmarks:
        myHand = results.multi_hand_landmarks[handNo]
        for id, lm in enumerate(myHand.landmark):
            # Landmark coordinates are normalized; scale them to pixels
            h, w, c = image.shape
            cx, cy = int(lm.x * w), int(lm.y * h)
            lmList.append([id, cx, cy])
            if draw:
                cv2.circle(image, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
    return lmList

# Get input from the webcam
cap = cv2.VideoCapture(0)
with mp_hands.Hands(
        min_detection_confidence=0.8,
        min_tracking_confidence=0.5) as hands:
    # Process the video frame by frame
    while cap.isOpened():
        ret, image = cap.read()
        if not ret:
            break
        # Flip the image horizontally for a selfie-view display, and convert
        # the BGR image to RGB, which MediaPipe expects
        image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
        image.flags.writeable = False
        results = hands.process(image)  # process the frame
        # Draw the hand annotations on the image
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
        image = findHands(image, results)

        # Save finger position landmarks in a list
        lmList = fingerPosition(image, results, draw=False)
        if len(lmList) != 0:
            fingers = []
            for id in range(1, 5):
                # A finger is raised when its tip is above (smaller y than)
                # the joint two landmarks below it
                if lmList[tipIds[id]][2] < lmList[tipIds[id] - 2][2]:
                    fingers.append(1)
                else:
                    fingers.append(0)
            totalFingers = fingers.count(1)

            # If the hand is fisted, press the space key to jump
            if totalFingers == 0:
                status = "Jump"
                pyautogui.press('space')
            else:
                status = "Run"
        cv2.putText(image, status, (40, 100), cv2.FONT_HERSHEY_COMPLEX,
                    2, (0, 255, 0), 3)

        cv2.imshow("Hand Landmark", image)
        # Press the Esc key to quit
        if cv2.waitKey(1) & 0xFF == 27:
            break
cap.release()
cv2.destroyAllWindows()
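The finger-counting heuristic used in the loop above can be sketched in isolation: a finger counts as raised when its tip landmark sits higher in the image (smaller y) than the joint two landmark indices below it, and the thumb is ignored. The landmark list below is made up for illustration; in the real script it comes from MediaPipe.

```python
# MediaPipe hand landmark indices of the five fingertips
tipIds = [4, 8, 12, 16, 20]

def count_raised_fingers(lmList):
    """Count raised fingers (index..pinky) from a list of [id, x, y] entries.

    A finger is treated as raised when its tip (tipIds[i]) has a smaller y
    than the joint two landmarks below it. The thumb is skipped, as in the
    main script.
    """
    fingers = []
    for i in range(1, 5):
        if lmList[tipIds[i]][2] < lmList[tipIds[i] - 2][2]:
            fingers.append(1)  # finger raised
        else:
            fingers.append(0)  # finger folded
    return fingers.count(1)

# Made-up landmarks: 21 entries of [id, x, y], all at y = 200,
# then two fingertips moved up to simulate raised fingers
fake = [[i, 0, 200] for i in range(21)]
fake[8][2] = 100   # index fingertip above its joint -> raised
fake[12][2] = 100  # middle fingertip above its joint -> raised
print(count_raised_fingers(fake))  # 2
```

With all fingertips level with their joints the count is 0, which is exactly the "fisted hand" condition that triggers the jump in the game loop.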
