Hand Gesture Controlled Robot using Arduino, Python & OpenCV

Hello Everyone,

In this article, we will create a Hand Gesture Controlled Robot in Python using OpenCV. We will use the MediaPipe and OpenCV libraries in Python, and we will also use the PyAutoGUI library for keyboard automation. Before we start, let's go over the basic definitions related to the project.

Please note that this project is aimed at high school students. Students should have some basic knowledge of Python programming.

LEARN – PRACTICE – SHARE

OpenCV is a huge open-source library for computer vision, machine learning, and image processing. OpenCV supports a wide variety of programming languages like Python, C++, Java, etc. It can process images and videos to identify objects, faces, or even the handwriting of a human.
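As a first taste of OpenCV, here is a minimal sketch (just an illustration, not part of the robot code yet) that opens the default webcam and shows the mirrored video until you press the q key:

import cv2

cap = cv2.VideoCapture(0)          # 0 = default webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:                    # stop if no frame could be read
        break
    frame = cv2.flip(frame, 1)     # mirror for a selfie-style view
    cv2.imshow("Webcam test", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()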

The MediaPipe library offers open-source, cross-platform, customizable ML solutions for live and streaming media.
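For a quick look at what MediaPipe gives us, the sketch below runs its hand detector on a single image file; "hand.jpg" is a placeholder path, so use any photo that shows a hand:

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

image = cv2.imread("hand.jpg")     # placeholder path - any photo of a hand
with mp_hands.Hands(static_image_mode=True) as hands:
    # MediaPipe expects RGB, while OpenCV loads BGR
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Each detected hand has 21 landmarks with normalized x/y/z values
        print(len(results.multi_hand_landmarks[0].landmark), "landmarks found")
    else:
        print("No hand detected")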

The PyAutoGUI library lets your Python scripts control the mouse and keyboard to automate interactions with other applications. The API is designed to be simple. PyAutoGUI works on Windows, macOS, and Linux.
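For example, the following sketch (again, just an illustration) waits a few seconds so you can focus another window, then taps the space key - the same call that appears, commented out, in the robot code later on:

import time
import pyautogui

time.sleep(3)               # 3 seconds to switch to another window
pyautogui.press('space')    # tap the space key once
print(pyautogui.size())     # screen resolution, useful for mouse automation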


Gesture Controlling a Robot Using OpenCV

Requirements for this project: Make sure you install the following programs on your computer.

  • Install Python – download the latest version for your computer: www.python.org/downloads
  • Install PyCharm – download the latest version for your computer: www.jetbrains.com/pycharm/download
  • Install the MediaPipe, OpenCV, pyFirmata, and PyAutoGUI packages using pip.
  • Run these pip commands in your terminal:
pip install PyAutoGUI
pip install mediapipe
pip install opencv-python
pip install pyfirmata
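After installing, you can check that everything is in place with a short sanity script: if all four imports succeed, the packages are installed correctly.

# If this script runs without errors, all packages are installed.
import cv2
import mediapipe
import pyautogui
import pyfirmata

print("All packages imported successfully")
print("OpenCV version:", cv2.__version__)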

HOW THE CODE WORKS:

Part 1 – Arduino: Upload the StandardFirmata sketch to the board.

To find it in the Arduino IDE, go to File / Examples / Firmata / StandardFirmata, then upload it to the board.
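The Python code in Part 2 connects to the Arduino through a serial port (COM17 in this example), and the port name differs from computer to computer. If you are not sure which one to use, this small sketch uses pyserial (installed together with pyfirmata) to list every serial port it can see; the Arduino usually shows up with "Arduino" or "CH340" in its description:

from serial.tools import list_ports

# Print every serial port pyserial can find, with its description
for port in list_ports.comports():
    print(port.device, "-", port.description)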

Part 2 – Control Arduino with Python

Open PyCharm and enter the code below.

# Code
from pyfirmata import Arduino, SERVO

port = 'COM17'   # change to the serial port your Arduino is on
pin = 3          # digital pin the servo signal wire is connected to
board = Arduino(port)

board.digital[pin].mode = SERVO

# Move the servo on the given pin to the given angle (0-180)
def rotateServo(pin, angle):
    board.digital[pin].write(angle)

# Map a gesture value to a servo position: 0 = fist, 1 = open hand
def robotb(val):
    if val == 0:
        rotateServo(pin, 180)
    elif val == 1:
        rotateServo(pin, 40)
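Save the file above as y55.py, because the full program in Part 3 imports it with "from y55 import robotb". You can then test the servo on its own with a short script like this (assuming the servo is wired to pin 3 as in Part 1); the servo should swing between its two positions a few times:

import time
from y55 import robotb   # importing y55 also connects to the board

for _ in range(3):
    robotb(0)            # fist gesture value: 180 degrees ("ON")
    time.sleep(1)
    robotb(1)            # open-hand gesture value: 40 degrees ("OFF")
    time.sleep(1)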

Part 3 – Control the Robot with OpenCV

Open a new file in PyCharm and enter the full program below.

# Full code

import cv2
import mediapipe as mp
import pyautogui          # optional: for simulating key presses
from y55 import robotb    # the Arduino/servo module from Part 2

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands

# Landmark indices of the five fingertips (thumb, index, middle, ring, pinky)
tipIds = [4, 8, 12, 16, 20]

status = "Normal"


# Optional helper (not called in the main loop below): run hand detection on
# a frame and draw the landmarks. Relies on the global "hands" object
# created in the main block.
def findHands(image, draw=True):
    imageRGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = hands.process(imageRGB)  # process the frame
    if results.multi_hand_landmarks:
        for handLms in results.multi_hand_landmarks:
            if draw:
                mp_drawing.draw_landmarks(image, handLms,
                                          mp_hands.HAND_CONNECTIONS)
    return image


# Find finger positions in the image and return a landmark list.
# Uses the global "results" computed in the main loop.
def fingerPosition(image, handNo=0, draw=True):
    lmList = []
    if results.multi_hand_landmarks:
        myHand = results.multi_hand_landmarks[handNo]
        for id, lm in enumerate(myHand.landmark):
            # Landmarks are normalized; convert them to pixel coordinates
            h, w, c = image.shape
            cx, cy = int(lm.x * w), int(lm.y * h)
            lmList.append([id, cx, cy])
            if draw:
                cv2.circle(image, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
    return lmList


# Get input from the webcam
cap = cv2.VideoCapture(0)
with mp_hands.Hands(
        min_detection_confidence=0.8,
        min_tracking_confidence=0.5) as hands:
    # Process frames until the 'q' key is pressed
    while cap.isOpened():
        ret, image = cap.read()
        if not ret:
            continue
        # Flip the image horizontally for a selfie-view display, and convert
        # the BGR image to RGB for MediaPipe.
        image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
        image.flags.writeable = False
        results = hands.process(image)
        # Draw the hand annotations on the image.
        image.flags.writeable = True
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # Save finger position landmarks in a list
        lmList = fingerPosition(image)
        if len(lmList) != 0:
            # For each finger (thumb excluded), compare the fingertip's y
            # coordinate with the joint two landmarks below it: a smaller y
            # means the tip is higher in the image, i.e. the finger is extended.
            fingers = []
            for id in range(1, 5):
                if lmList[tipIds[id]][2] < lmList[tipIds[id] - 2][2]:
                    fingers.append(1)   # finger extended
                elif lmList[tipIds[id]][2] > lmList[tipIds[id] - 2][2]:
                    fingers.append(0)   # finger folded
            totalFingers = fingers.count(1)

            # If the hand is a fist, switch the robot on
            if totalFingers == 0:
                # pyautogui.press('space')  # optionally press a key instead
                status = "ON"
                robotb(0)
            else:
                status = "OFF"
                robotb(1)
            cv2.putText(image, status, (40, 100), cv2.FONT_HERSHEY_COMPLEX,
                        2, (0, 255, 0), 3)

        cv2.imshow("Hand Landmark", image)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
