JetBot Gesture Recognition Experiment

Published: 2024-04-26

Experiment Overview

The goal of this experiment is to implement gesture recognition on a JetBot smart car; the board used is a Jetson Nano. Through the car's camera, five different gestures are recognized to control the car's motion and lights.


1. Data Collection

Connect to the JupyterLab environment on the car's board and run the following code blocks to set up data collection. Five gesture classes are collected, with 100-200 images per gesture. The code below wires up only the "stop" class; a sketch extending it to all five classes follows the block.

import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg

camera = Camera.instance(width=224, height=224)

image = widgets.Image(format='jpeg', width=224, height=224)  # this width and height don't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)

import os

stop_dir = 'dataset/stop'

# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
    os.makedirs(stop_dir)
except FileExistsError:
    print('Directories not created because they already exist')
button_layout = widgets.Layout(width='128px', height='64px')
stop_button = widgets.Button(description='add stop', button_style='success', layout=button_layout)
stop_count = widgets.IntText(layout=button_layout, value=len(os.listdir(stop_dir)))

display(widgets.HBox([stop_count, stop_button]))
from uuid import uuid1

def save_snapshot(directory):
    image_path = os.path.join(directory, str(uuid1()) + '.jpg')
    with open(image_path, 'wb') as f:
        f.write(image.value)

def save_stop():
    global stop_dir, stop_count
    save_snapshot(stop_dir)
    stop_count.value = len(os.listdir(stop_dir))
    
    
# attach the callbacks, we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it.
stop_button.on_click(lambda x: save_stop())
display(image)
display(widgets.HBox([stop_count, stop_button]))
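
The block above only wires up the "stop" class; as mentioned earlier, five gestures are collected. Below is a sketch of extending it to all five classes. The directory and gesture names here are assumptions, and `image` is the camera widget created above:

import os
from uuid import uuid1
import ipywidgets.widgets as widgets
from IPython.display import display

classes = ['stop', 'forward', 'backward', 'left', 'right']   # assumed gesture names
dirs = {c: os.path.join('dataset', c) for c in classes}
for d in dirs.values():
    os.makedirs(d, exist_ok=True)                            # create the folders if missing

button_layout = widgets.Layout(width='128px', height='64px')
buttons, counters = {}, {}

def make_saver(name):
    # Return a click handler that saves the current camera frame into the class folder
    def _save(_):
        path = os.path.join(dirs[name], str(uuid1()) + '.jpg')
        with open(path, 'wb') as f:
            f.write(image.value)                              # JPEG bytes from the image widget
        counters[name].value = len(os.listdir(dirs[name]))
    return _save

for c in classes:
    buttons[c] = widgets.Button(description='add ' + c, layout=button_layout)
    counters[c] = widgets.IntText(layout=button_layout, value=len(os.listdir(dirs[c])))
    buttons[c].on_click(make_saver(c))

display(widgets.VBox([widgets.HBox([counters[c], buttons[c]]) for c in classes]))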

2. Dataset Construction

The dataset is built from hand keypoints detected with MediaPipe, which locates 21 hand landmarks per hand; the landmark ordering is shown in the figure below.

(Figure: MediaPipe's 21 hand landmark indices)
Below is a simple MediaPipe usage example:

import cv2
import mediapipe as mp

# Initialize the MediaPipe Hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=True,
                       max_num_hands=2,
                       min_detection_confidence=0.5,
                       min_tracking_confidence=0.5)

mp_drawing = mp.solutions.drawing_utils  # utility for drawing the landmarks

# Read the image
image_path = 'dataset/0b61e88c-02ad-11ef-9c74-28dfeb422309.jpg'  # replace with your own image path
image = cv2.imread(image_path)

if image is None:
    print("Cannot find the image.")
else:
    # Convert the image from BGR to RGB
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Process the image and detect hands
    results = hands.process(image_rgb)

    # Convert the image back from RGB to BGR for display
    image_bgr = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2BGR)

    # Draw the hand landmarks
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_drawing.draw_landmarks(image_bgr, hand_landmarks, mp_hands.HAND_CONNECTIONS)


    # Show the image
    cv2.imshow('Hand Detection', image_bgr)
    cv2.waitKey(0)  # wait for a key press
    cv2.destroyAllWindows()

    # # Optional: save the output image
    # output_image_path = 'path_to_your_output_image.jpg'  # output file path
    # cv2.imwrite(output_image_path, image_bgr)
    # print("Output image is saved as", output_image_path)

# Release resources
hands.close()

The key call is mediapipe.solutions.hands.Hands, which accepts the parameters listed below (a stream-mode configuration sketch follows the list):

1. static_image_mode:
Type: bool
Default: False
When set to True, hand detection is run on every call, which suits static images. When False, detection runs on the first frame and subsequent frames are mainly tracked, which is faster and more efficient for video streams.
2. max_num_hands:
Type: int
Default: 2
The maximum number of hands detected at once. Adjust it to the application, for example when more hands may appear in the scene.
3. min_detection_confidence:
Type: float
Default: 0.5
Confidence threshold for detection. A detected hand is only considered valid when its confidence exceeds this value. The range is 0 to 1; raising it reduces false detections but may miss some correct ones.
4. min_tracking_confidence:
Type: float
Default: 0.5
Used in non-static mode; the confidence threshold for tracking. When the tracking confidence of a hand drops below this value, detection is re-run on the next frame instead of continuing to track. The range is likewise 0 to 1.
5. model_complexity:
Type: int
Default: 1
Controls the complexity of the hand landmark model. Allowed values are 0, 1, or 2. Higher complexity may give higher accuracy but needs more compute and may add latency.
6. smooth_landmarks:
Type: bool
Default: True
Whether to apply filtering to the detected landmarks. Enabling it gives smoother landmark motion, especially when processing video streams.
7. enable_segmentation:
Type: bool
Default: False
Whether to enable hand segmentation. When enabled, a segmentation mask of the hand is returned alongside the landmarks, which can be used for further image processing or visual effects.
8. smooth_segmentation:
Type: bool
Default: True
When segmentation is enabled, whether to smooth the segmentation mask, which helps reduce jitter in the mask.
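
For contrast with the static-image example above, a live video stream would typically be configured with static_image_mode=False. A minimal sketch; only the four parameters used elsewhere in this article are passed, and the confidence values here are arbitrary example choices:

import mediapipe as mp

# Stream-style configuration: detect once, then track across frames
hands_stream = mp.solutions.hands.Hands(
    static_image_mode=False,       # track between frames instead of re-detecting every frame
    max_num_hands=1,               # one hand is enough for gesture control
    min_detection_confidence=0.6,  # stricter detection to cut false positives
    min_tracking_confidence=0.5)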

As the figures below show, it captures the keypoints of both an open palm and a clenched fist reasonably well.

(Figures: detected landmarks on an open palm and on a fist)
Each landmark contains the following three values:

landmark {
  x: 0.5458590388298035
  y: 0.37459805607795715
  z: -0.05105478689074516
}

Their meanings are as follows:

x:
The x coordinate is the landmark's horizontal position in the image. It is normalized to the range 0 to 1, where 0 is the left edge of the image and 1 is the right edge.
y:
The y coordinate is the landmark's vertical position in the image. It is also normalized to 0 to 1, where 0 is the top edge and 1 is the bottom edge.
z:
The z coordinate represents the landmark's depth relative to the camera plane. It is normalized and expressed relative to the hand itself (roughly, the wrist acts as the reference origin), so it is not a physical unit but a relative scale: it can be used to compare which landmarks of the same hand are closer to or farther from the camera, but its magnitude has no absolute distance meaning.
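
Because x and y are normalized, converting a landmark to pixel coordinates only requires multiplying by the image width and height. A small sketch, reusing `image` and `results` from the example above:

# Convert the wrist landmark (index 0) of the first detected hand to pixel coordinates
h, w = image.shape[:2]
if results.multi_hand_landmarks:
    wrist = results.multi_hand_landmarks[0].landmark[0]
    px, py = int(wrist.x * w), int(wrist.y * h)
    print(f"wrist at pixel ({px}, {py}), relative depth z = {wrist.z:.3f}")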

Run the following code to build the dataset; the train/validation split used later defaults to 80/20:

import cv2
import mediapipe as mp
import os


def data_calculate(folder_path, class_name):
    # Extract 21 hand landmarks from every image in folder_path and append one CSV
    # row per image to data.csv: 63 coordinate values plus the class label.
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    fail_img = []
    for img_name in os.listdir(folder_path):
        img = cv2.imread(folder_path + '/' + img_name)
        # Flip horizontally so left/right gestures match the car's frame of reference
        img = cv2.flip(img, 1)
        # BGR to RGB (MediaPipe expects RGB input)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        try:
            results = hands.process(img)
            with open('data.csv', 'a') as f:
                for i in results.multi_hand_landmarks[0].landmark:
                    f.write(f'{i.x},{i.y},{i.z},')
                f.write(class_name)
                f.write('\n')
        except Exception:
            # multi_hand_landmarks is None when no hand was detected
            fail_img.append(img_name)
    for i in fail_img:
        print(f"Can not extract landmarks from image {i}")
    print(len(fail_img))


data_calculate(folder_path='dataset/stop_img', class_name="0")
data_calculate(folder_path='dataset/forward_img', class_name="1")
data_calculate(folder_path='dataset/backward_img', class_name="2")
data_calculate(folder_path='dataset/left_img', class_name="3")
data_calculate(folder_path='dataset/right_img', class_name="4")

One small detail deserves attention: during conversion, cv2.flip(img, 1) mirrors each raw image horizontally. When the data was collected the person faced the car, so the person's left/right is opposite to the car's left/right; mirroring the image resolves this.

The dataset is finally written to data.csv. Each row has 64 values: the first 63 columns are the x, y, z values of the 21 landmarks, and the last column is the class label.
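
Before training, it can be worth sanity-checking the generated file. A small sketch, assuming data.csv is in the current directory and has no header row (as written by the script above):

import pandas as pd

# Load the CSV written by data_calculate(); it has no header row
df = pd.read_csv('data.csv', header=None)
print(df.shape)                        # (num_samples, 64): 21 landmarks * 3 coords + 1 label
print(df.iloc[:, -1].value_counts())   # number of samples per gesture class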

3. Model Selection and Training

The model is the gesture-recognition network proposed in the 2018 paper "Deep Learning for Hand Gesture Recognition on Skeletal Data".
Paper: https://ieeexplore.ieee.org/document/8373818
Code: https://github.com/eddieai/Gexpress/tree/master

The model's block diagram is shown below:
(Figure: model architecture diagram)
The core idea is to use three parallel feature-extraction branches: the first applies three cascaded 1D convolutions with kernel size 7, the second three cascaded 1D convolutions with kernel size 3, and the third extracts features with three 1D average-pooling layers. The branch outputs are concatenated and passed through a fully connected head to produce the final class scores. All convolutional and linear layers are initialized with Xavier initialization.
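
As a quick sanity check of the in_features=9 * 7 used in the fully connected head below: each branch halves the 63-value input three times with AvgPool1d(2), ending at length 7, and the concatenated channels are 4 + 4 + 1 = 9. A minimal sketch that traces this:

import torch
import torch.nn as nn

# Three stride-2 average poolings shrink the 63-value landmark vector: 63 -> 31 -> 15 -> 7
x = torch.randn(1, 1, 63)           # (batch, channels, landmark values)
pool = nn.AvgPool1d(2)
for _ in range(3):
    x = pool(x)
print(x.shape)                      # torch.Size([1, 1, 7])
# high branch: 4 channels, low branch: 4 channels, residual branch: 1 channel
# -> concat gives 9 channels of length 7, i.e. 9 * 7 = 63 inputs to the linear layer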

The model and training code are as follows:

import itertools
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
import pandas as pd
import random
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt


def load_data(filename):
    # data.csv has no header row, so read it with header=None
    readbook = pd.read_csv(f'{filename}.csv', header=None)
    nplist = readbook.T.to_numpy()
    data = nplist[0:-1].T   # first 63 columns: landmark coordinates
    data = np.float64(data)
    target = nplist[-1]     # last column: class label
    return data, target


def random_number(data_size, key):
    number_set = []
    for i in range(data_size):
        number_set.append(i)
    if key == 1:
        random.shuffle(number_set)
    return number_set

def split_dataset(data_set, target_set, rate, ifsuf):
    train_size = int((1 - rate) * len(data_set))  # number of samples in the training set
    data_index = random_number(len(data_set), ifsuf)
    x_train = data_set[data_index[:train_size]]
    x_test = data_set[data_index[train_size:]]
    y_train = target_set[data_index[:train_size]]
    y_test = target_set[data_index[train_size:]]
    return x_train, x_test, y_train, y_test


def inputtotensor(inputtensor, labeltensor):  # convert the inputs and labels to tensors
    inputtensor = np.array(inputtensor)
    inputtensor = torch.FloatTensor(inputtensor)

    labeltensor = np.array(labeltensor)
    labeltensor = labeltensor.astype(float)
    labeltensor = torch.LongTensor(labeltensor)

    return inputtensor, labeltensor


def addbatch(data_train, data_test, batchsize):
    data = TensorDataset(data_train, data_test)
    data_loader = DataLoader(data, batch_size=batchsize, shuffle=False)

    return data_loader


# Define the neural network model
class Net(nn.Module):
    def __init__(self, n_channels=63, n_classes=5, dropout_probability=0.2):
        super(Net, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.dropout_probability = dropout_probability
        self.all_conv_high = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        self.all_conv_low = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        self.all_residual = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2)
        )])

        self.fc = torch.nn.Sequential(
            torch.nn.Linear(in_features=9 * 7, out_features=512),
            torch.nn.ReLU(),
            torch.nn.Linear(in_features=512, out_features=n_classes)
        )
        for module in itertools.chain(self.all_conv_high, self.all_conv_low, self.all_residual):
            for layer in module:
                if layer.__class__.__name__ == "Conv1d":
                    torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                    torch.nn.init.constant_(layer.bias, 0.1)

        for layer in self.fc:
            if layer.__class__.__name__ == "Linear":
                torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                torch.nn.init.constant_(layer.bias, 0.1)

    def forward(self, input):

        input = input.unsqueeze(1)

        high = self.all_conv_high[0](input)
        low = self.all_conv_low[0](input)
        ap_residual = self.all_residual[0](input)

        # Time convolutions are concatenated along the feature maps axis
        output = torch.cat([
            high,
            low,
            ap_residual
        ], dim=1)
        N, C, F = output.size()
        output = self.fc(output.view(N, C * F))

        return output


def train_test(traininput, trainlabel, testinput, testlabel, batchsize):
    traindata = addbatch(traininput, trainlabel, batchsize)  # wrap the training set in a DataLoader (already shuffled in split_dataset)
    maxacc = 0
    start = time.time()
    for epoch in range(101):
        for step, data in enumerate(traindata):
            net.train()
            inputs, labels = data
            # forward pass
            out = net(inputs)
            # compute the loss
            loss = loss_func(out, labels)
            # clear gradients from the previous step
            optimizer.zero_grad()
            # backpropagation
            loss.backward()
            # update parameters
            optimizer.step()
        # evaluate accuracy on the test set
        net.eval()
        testout = net(testinput)
        testloss = loss_func(testout, testlabel)
        prediction = torch.max(testout, 1)[1]  # index of the max logit = predicted class
        pred_y = prediction.numpy()
        target_y = testlabel.data.numpy()
        j = 0
        for i in range(pred_y.size):
            if pred_y[i] == target_y[i]:
                j += 1
        acc = j / pred_y.size
        if epoch % 10 == 0:
            print("Epoch", epoch, "test accuracy:", acc)
        if acc > maxacc:
            torch.save(net.state_dict(), "model.pt", _use_new_zipfile_serialization=False)
            print('save ' + str(acc))
            maxacc = acc

    end = time.time()
    print(end - start)


if __name__ == "__main__":
    feature, label = load_data('data')
    split = 0.2  # fraction of the data held out as the test set
    ifshuffle = 1  # 1 = shuffle the dataset, 0 = keep the original order
    x_train, x_test, y_train, y_test = split_dataset(feature, label, split, ifshuffle)
    traininput, trainlabel = inputtotensor(x_train, y_train)
    testinput, testlabel = inputtotensor(x_test, y_test)
    traininput = nn.functional.normalize(traininput)
    testinput = nn.functional.normalize(testinput)
    LR = 0.001
    batchsize = 2
    net = Net()
    optimizer = torch.optim.Adam(net.parameters(), LR)
    loss_func = torch.nn.CrossEntropyLoss()
    train_test(traininput, trainlabel, testinput, testlabel, batchsize)
    input, label = inputtotensor(feature, label)
    input = nn.functional.normalize(input)
    model = Net()
    model.eval()
    model.load_state_dict(torch.load("model.pt"))
    output = model(input)
    pred = torch.max(output, 1)[1]
    C = confusion_matrix(label, pred, labels=[0, 1, 2, 3, 4])
    plt.matshow(C, cmap=plt.cm.Reds)
    for i in range(len(C)):
        for j in range(len(C)):
            plt.annotate(C[j, i], xy=(i, j), horizontalalignment='center', verticalalignment='center')
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()

After training completes, the model with the best validation accuracy is saved as model.pt.

4. Model Deployment, Motion, and Light Control

Upload model.pt to the car and run the following code to deploy it:

import traitlets
from IPython.display import display
import ipywidgets.widgets as widgets
from jetbot import Camera, bgr8_to_jpeg

#camera = Camera.instance(width=224, height=224)
camera = Camera.instance(width=224, height=224, fps=20)
image = widgets.Image(format='jpeg', width=224, height=224)

camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)

display(widgets.HBox([image]))
import cv2
import mediapipe as mp
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import itertools

def data_calculate(image):
    # Extract the 63 landmark values (x, y, z of 21 points) from one camera frame
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    # Flip horizontally, matching the dataset construction step
    img = cv2.flip(image, 1)
    # BGR to RGB
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    input = []
    results = hands.process(img)

    # Raises if no hand was detected; the caller's try/except handles that case
    for i in results.multi_hand_landmarks[0].landmark:
        input.extend([i.x, i.y, i.z])

    return input


def inputtotensor(inputtensor):
    inputtensor = np.array(inputtensor)
    inputtensor = torch.FloatTensor(inputtensor)

    return inputtensor


class Net(nn.Module):
    def __init__(self, n_channels=63, n_classes=5, dropout_probability=0.2):
        super(Net, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.dropout_probability = dropout_probability
        self.all_conv_high = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=7, padding=3),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        self.all_conv_low = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=8, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AvgPool1d(2),

            torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=self.dropout_probability),
            torch.nn.AvgPool1d(2)
        )])
        self.all_residual = torch.nn.ModuleList([torch.nn.Sequential(
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2),
            torch.nn.AvgPool1d(2)
        )])

        self.fc = torch.nn.Sequential(
            torch.nn.Linear(in_features=9 * 7, out_features=512),
            torch.nn.ReLU(),
            torch.nn.Linear(in_features=512, out_features=n_classes)
        )
        for module in itertools.chain(self.all_conv_high, self.all_conv_low, self.all_residual):
            for layer in module:
                if layer.__class__.__name__ == "Conv1d":
                    torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                    torch.nn.init.constant_(layer.bias, 0.1)

        for layer in self.fc:
            if layer.__class__.__name__ == "Linear":
                torch.nn.init.xavier_uniform_(layer.weight, gain=torch.nn.init.calculate_gain('relu'))
                torch.nn.init.constant_(layer.bias, 0.1)

    def forward(self, input):

        input = input.unsqueeze(1)

        high = self.all_conv_high[0](input)
        low = self.all_conv_low[0](input)
        ap_residual = self.all_residual[0](input)

        # Time convolutions are concatenated along the feature maps axis
        output = torch.cat([
            high,
            low,
            ap_residual
        ], dim=1)
        N, C, F = output.size()
        output = self.fc(output.view(N, C * F))

        return output   

def preprocess(x):
    x = data_calculate(x)
    x = inputtotensor(x)
    x = x.view(1,63)
    x = nn.functional.normalize(x)
    return x

model = Net()                             
model.eval()
model.load_state_dict(torch.load("model.pt"))

Next, the classification result drives the corresponding motion and lighting control. Because the board's compute power is limited, the live camera preview is unlinked first to reduce stuttering during inference:

camera_link.unlink() 
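
When the live preview is needed again (for example while debugging), the traitlets link can be re-attached; a one-line sketch using the camera_link object created earlier:

camera_link.link()  # resume streaming camera frames to the image widget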

Initialize the motors, RGB lights, and other peripherals:

from jetbot import Robot
robot = Robot()
from RGB_Lib import Programing_RGB
RGB = Programing_RGB()
import RPi.GPIO as GPIO
BEEP_pin = 6 
GPIO.setmode(GPIO.BCM)
# set pin as an output pin with optional initial state of HIGH
GPIO.setup(BEEP_pin, GPIO.OUT, initial=GPIO.LOW)

import torch.nn.functional as F
import time

def update(change):
    global robot
    x = change['new']
    try:
        x = preprocess(x)
        output = model(x)
        y = torch.max(output, 1)[1]
        print(y)
        if y == 0: 
            robot.stop()
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_ChameleonLight_RGB()
            RGB.OFF_ALL_RGB()
        if y == 1:
            robot.forward(0.4)
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_BreathSColor_RGB(2)
            RGB.Set_BreathSSpeed_RGB(1)
            RGB.Set_BreathSLight_RGB()
        if y == 2: 
            robot.backward(0.4)
            RGB.OFF_ALL_RGB()
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_An_RGB(4, 0xFF, 0x00, 0x00)
        if y == 3: 
            robot.left(0.5)
            RGB.OFF_ALL_RGB()
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_An_RGB(9, 0xFF, 0x00, 0x00)
        if y == 4: 
            robot.right(0.5)
            GPIO.output(BEEP_pin, GPIO.LOW)
            RGB.Set_All_RGB(0xFF, 0x00, 0x00)   
    except:
        # no hand was detected (or preprocessing failed): stop the car
        robot.stop()

    time.sleep(0.5)

update({'new': camera.value})  # we call the function once to initialize
camera.observe(update, names='value') 
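
To stop the gesture-control loop cleanly, the camera callback can be detached and the motors halted; a short sketch following the usual JetBot notebook pattern:

import time

camera.unobserve(update, names='value')  # stop calling update() on new frames
time.sleep(0.5)                          # give any in-flight callback time to finish
robot.stop()                             # make sure the motors are off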

Motion control relies on the car's Robot class, which already integrates the basic motions: robot.stop(), robot.forward(), robot.backward(), robot.left(), and robot.right() stop the car, drive it forward or backward, and turn it left or right.
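
A minimal sketch of these motion primitives, reusing the robot instance created above (speeds are arbitrary example values):

import time

robot.forward(0.4)   # drive forward at 40% speed
time.sleep(1.0)
robot.left(0.5)      # rotate left
time.sleep(0.5)
robot.stop()         # stop both motors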

Lighting control relies on the RGB_Lib module; its contents are listed below, with each function's purpose described in the comments:

import Adafruit_GPIO as GPIO

class Programing_RGB(object):
    # Obtain an I2C device instance
    def get_i2c_device(self, address, i2c, i2c_bus):
        # If an i2c instance is provided, use it to get the device
        if i2c is not None:
            return i2c.get_i2c_device(address)
        else:
            # Otherwise import Adafruit_GPIO.I2C and get the device on the given bus
            import Adafruit_GPIO.I2C as I2C
            if i2c_bus is None:
                return I2C.get_i2c_device(address)
            else:
                return I2C.get_i2c_device(address, busnum=i2c_bus)

    # Initialization: set up the default I2C device
    def __init__(self):
        # Create the I2C device on bus 1 at address 0x1b
        self._device = self.get_i2c_device(0x1b, None, 1)

    # Set the RGB value of all LEDs
    def Set_All_RGB(self, R_Value, G_Value, B_Value):
        # Try to write the RGB values to the I2C device; report any I2C error
        try:
            # Select all LEDs
            self._device.write8(0x00, 0xFF)
            # Set the red, green, and blue values
            self._device.write8(0x01, R_Value)
            self._device.write8(0x02, G_Value)
            self._device.write8(0x03, B_Value)
        except:
            print('Set_All_RGB I2C error')

    # Turn off all RGB LEDs
    def OFF_ALL_RGB(self):
        try:
            # Setting all RGB values to 0 turns the lights off
            self.Set_All_RGB(0x00, 0x00, 0x00)
        except:
            print('OFF_ALL_RGB I2C error')

    # Set a single RGB LED
    def Set_An_RGB(self, Position, R_Value, G_Value, B_Value):
        try:
            # Check that the position is valid
            if Position <= 0x09:
                # Set the color of the LED at the given position
                self._device.write8(0x00, Position)
                self._device.write8(0x01, R_Value)
                self._device.write8(0x02, G_Value)
                self._device.write8(0x03, B_Value)
        except:
            print('Set_An_RGB I2C error')

    # Waterfall light effect
    def Set_WaterfallLight_RGB(self):
        try:
            self._device.write8(0x04, 0x00)
        except:
            print('Set_WaterfallLight_RGB I2C error')

    # Breathing light with changing colors
    def Set_BreathColor_RGB(self):
        try:
            self._device.write8(0x04, 0x01)
        except:
            print('Set_BreathColor_RGB I2C error')

    # Chameleon light effect
    def Set_ChameleonLight_RGB(self):
        try:
            self._device.write8(0x04, 0x02)
        except:
            print('Set_ChameleonLight_RGB I2C error')

    # Set the breathing light color
    def Set_BreathSColor_RGB(self, color):
        # The color value should be in the range 0-6
        try:
            self._device.write8(0x05, color)
        except:
            print('Set_BreathSColor_RGB I2C error')

    # Set the breathing light speed
    def Set_BreathSSpeed_RGB(self, speed):
        try:
            self._device.write8(0x06, speed)
        except:
            print('Set_BreathSSpeed_RGB I2C error')

    # Single-color breathing light effect
    def Set_BreathSLight_RGB(self):
        try:
            self._device.write8(0x04, 0x03)
        except:
            print('Set_BreathSLight_RGB I2C error')

With these functions, various lighting effects can be set: all on, all off, single-LED control, waterfall, breathing, chameleon, and so on.
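
A short usage sketch cycling through a few of these effects with the RGB instance created earlier (the delays are arbitrary):

import time

RGB.Set_All_RGB(0x00, 0xFF, 0x00)        # all LEDs green
time.sleep(2)
RGB.Set_An_RGB(0x01, 0x00, 0x00, 0xFF)   # LED at position 1 blue
time.sleep(2)
RGB.Set_WaterfallLight_RGB()             # waterfall effect
time.sleep(2)
RGB.Set_BreathSColor_RGB(2)              # breathing color index 2
RGB.Set_BreathSSpeed_RGB(1)              # breathing speed 1
RGB.Set_BreathSLight_RGB()               # start single-color breathing
time.sleep(2)
RGB.OFF_ALL_RGB()                        # everything off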

Appendix: Environment Package Versions

absl-py                       0.9.0
Adafruit-GPIO                 1.0.4
Adafruit-MotorHAT             1.4.0
Adafruit-PureIO               0.2.3
Adafruit-SSD1306              1.6.2
apt-clone                     0.2.1
apturl                        0.5.2
asn1crypto                    0.24.0
astor                         0.8.1
attrs                         19.3.0
backcall                      0.1.0
beautifulsoup4                4.6.0
bleach                        3.1.4
blinker                       1.4
Brlapi                        0.6.6
certifi                       2018.1.18
chardet                       3.0.4
click                         7.1.1
colorama                      0.3.7
cryptography                  2.1.4
cupshelpers                   1.0
dataclasses                   0.8
decorator                     4.4.2
defer                         1.0.6
defusedxml                    0.6.0
distro-info                   0.18ubuntu0.18.04.1
entrypoints                   0.3
feedparser                    5.2.1
Flask                         1.1.1
funcsigs                      1.0.2
gast                          0.3.3
google-pasta                  0.2.0
graphsurgeon                  0.4.1
grpcio                        1.27.2
h5py                          2.10.0
html5lib                      0.999999999
httplib2                      0.9.2
idna                          2.6
importlib-metadata            1.6.0
ipykernel                     5.2.0
ipython                       7.13.0
ipython-genutils              0.2.0
ipywidgets                    7.5.1
itsdangerous                  1.1.0
jedi                          0.16.0
jetbot                        0.3.0
Jetson.GPIO                   2.0.4
jetson-stats                  2.0.0
Jinja2                        2.11.1
json5                         0.9.4
jsonschema                    3.2.0
jupyter                       1.0.0
jupyter-client                6.1.2
jupyter-console               6.1.0
jupyter-core                  4.6.3
jupyterlab                    2.0.1
jupyterlab-server             1.0.7
Keras-Applications            1.0.8
Keras-Preprocessing           1.1.0
keyring                       10.6.0
keyrings.alt                  3.0
language-selector             0.1
launchpadlib                  1.10.6
lazr.restfulclient            0.13.5
lazr.uri                      1.0.3
louis                         3.5.0
lxml                          4.2.1
macaroonbakery                1.1.3
Mako                          1.0.7
Markdown                      3.2.1
MarkupSafe                    1.0
mediapipe                     0.8
mistune                       0.8.4
mock                          4.0.2
nbconvert                     5.6.1
nbformat                      5.0.4
nodejs                        0.1.1
notebook                      6.0.3
numpy                         1.19.4
oauth                         1.0.1
oauthlib                      2.0.6
optional-django               0.1.0
PAM                           0.4.2
pandocfilters                 1.4.2
parso                         0.6.2
pbr                           5.4.4
pexpect                       4.8.0
pickleshare                   0.7.5
Pillow                        5.2.0
pip                           21.3.1
portpicker                    1.3.1
prometheus-client             0.7.1
prompt-toolkit                3.0.5
protobuf                      3.19.6
psutil                        5.7.0
ptyprocess                    0.6.0
py-cpuinfo                    5.0.0
pycairo                       1.16.2
pycrypto                      2.6.1
pycups                        1.9.73
Pygments                      2.6.1
PyGObject                     3.26.1
PyICU                         1.9.8
PyJWT                         1.5.3
pymacaroons                   0.13.0
PyNaCl                        1.1.2
pyRFC3339                     1.0
pyrsistent                    0.16.0
pyserial                      3.4
python-apt                    1.6.6
python-dateutil               2.6.1
python-debian                 0.1.32
pytz                          2018.3
pyxdg                         0.25
PyYAML                        3.12
pyzmq                         19.0.0
qtconsole                     4.7.2
QtPy                          1.9.0
requests                      2.23.0
requests-unixsocket           0.1.5
SecretStorage                 2.3.1
Send2Trash                    1.5.0
setuptools                    46.1.3
simplejson                    3.13.2
six                           1.14.0
spidev                        3.4
ssh-import-id                 5.7
system-service                0.3
systemd-python                234
tensorboard                   1.13.1
tensorflow-estimator          1.13.0
tensorflow-gpu                1.13.1+nv19.3
tensorrt                      6.0.1.10
termcolor                     1.1.0
terminado                     0.8.3
testpath                      0.4.4
testresources                 2.0.1
torch                         1.0.0a0+18eef1d
torchvision                   0.2.2.post3
tornado                       6.0.4
traitlets                     5.0.0.dev0
ubuntu-drivers-common         0.0.0
ubuntu-pro-client             8001
uff                           0.6.5
unity-scope-calculator        0.1
unity-scope-chromiumbookmarks 0.1
unity-scope-colourlovers      0.1
unity-scope-devhelp           0.1
unity-scope-firefoxbookmarks  0.1
unity-scope-manpages          0.1
unity-scope-openclipart       0.1
unity-scope-texdoc            0.1
unity-scope-tomboy            0.1
unity-scope-virtualbox        0.1
unity-scope-yelp              0.1
unity-scope-zotero            0.1
urllib3                       1.22
wadllib                       1.3.2
wcwidth                       0.1.9
webencodings                  0.5
Werkzeug                      1.0.0
wheel                         0.30.0
widgetsnbextension            3.5.1
wrapt                         1.12.1
xkit                          0.0.0
zipp                          3.1.0
zope.interface                4.3.2