Raspberry Pi 4B and STM32 Collaborative Development for an Intelligent Access Control System

In this project, we combine the Raspberry Pi 4B and the STM32 to collaboratively build an intelligent access control system, leveraging the strengths of both to improve overall performance and efficiency. The Raspberry Pi 4B handles complex, computationally intensive tasks such as image processing and facial recognition, while the STM32 focuses on control tasks within the access control system, including door operation and user permission management. Notably, the STM32 offers abundant GPIO and hardware interfaces, providing greater design flexibility: depending on your requirements, you can add security features such as keypads or fingerprint readers to create a multi-layered identity verification system.

Designed intelligent access control system

Project Overview

Intelligent Access Control System

An intelligent access control system is a sophisticated solution that centrally manages and secures access devices through advanced technologies such as biometrics, smart cards, and remote monitoring. The system supports various control methods, including passwords, facial recognition, iris scanning, and fingerprint recognition, making it applicable in a wide range of scenarios. It not only safeguards personal and property security but also improves the efficiency of public space management.

There are two deployment categories for intelligent access control systems: local deployment and service provider operation.

  • Local Deployment:

The core functionalities are implemented on local access devices or a local server, where all authentication, recognition, and decision-making processes occur within the device itself or on a server within the local network.

This approach generally incurs lower overall costs as it doesn’t involve cloud service fees. However, performance is more dependent on the hardware capabilities and processing speed of the local devices, making it suitable for smaller-scale deployments.

  • Service Provider Operation:

Target data collected is sent to the cloud via network technology, where the cloud performs identity verification, recognition, and decision-making processes. The results are then sent back to the local device. The cloud infrastructure is typically operated and maintained by a service provider.

This deployment model offers powerful cloud computing capabilities, capable of handling large-scale data and complex algorithms. It enables remote management and monitoring. However, cloud computing and storage resources come with associated costs, making it more advantageous for large-scale public spaces.

Principle of the intelligent access control system

Facial Recognition and LBP

Facial recognition, a technology based on computer vision and pattern recognition, aims to automatically detect, identify, and verify faces in images or videos. The core principle involves capturing features of the target and comparing these features with pre-stored facial templates in the system to confirm individual identities.

LBP, or Local Binary Pattern, is a feature descriptor used for texture analysis, extensively applied in areas such as facial recognition and image classification. The fundamental idea involves analyzing each pixel in an image by comparing its grayscale value with neighboring pixels, generating a local binary pattern (0 or 1). This operation is conducted across the entire image, creating a feature map to describe the texture features of the image.
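The per-pixel comparison described above can be sketched in a few lines of Python. This is an illustrative, pure-Python version for a single 3x3 neighborhood (real implementations such as OpenCV's LBPH operator run this over every pixel and then build histograms of the resulting codes):

```python
def lbp_code(patch):
    """8-bit LBP code for the center pixel of a 3x3 grayscale patch.

    Each of the 8 neighbors (clockwise from the top-left) contributes one
    bit: 1 if its gray value is >= the center value, else 0.
    """
    center = patch[1][1]
    # Neighbor coordinates, clockwise starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[ 90, 120,  60],
         [200, 100,  50],
         [ 30, 140, 170]]
print(lbp_code(patch))  # → 178
```

The resulting 8-bit code (0–255) is the texture primitive from which the feature histograms used in recognition are built.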

LBP principle

Project Design Principle

The core objective of the project is to implement access control system switching through facial recognition. Here is an overview of the system design and implementation process:

Facial Recognition Process:

  • Combine Raspberry Pi 4B with a camera to capture real-time video streams.
  • Utilize the LBP algorithm for facial recognition, extracting local binary pattern features of the detected faces.
  • Once a face is successfully identified, the recognition result is packaged on the Raspberry Pi 4B for transmission to the STM32.

Data Transmission and Decoding:

  • Use the UART serial port of the Raspberry Pi 4B to send facial data results to the connected STM32.
  • The STM32 decodes the received ASCII data back into a numeric index (its I2C bus is used separately, to drive the OLED display).

Display and Control:

  • The decoded data is displayed on an OLED screen connected to the STM32, showcasing real-time facial recognition results.
  • The STM32 decides whether to execute the door-opening operation based on actual conditions, and this decision is implemented by driving the SG90 servo motor to control the door.

Smart access control system workflow developed with Raspberry Pi 4B and STM32

Face Recognition With Raspberry Pi and OpenCV

Facial Information Collection

We first need to collect facial information of authorized people and execute the following Python code:

					import cv2
import os
if __name__ == "__main__":
    str_face_id = ""
    index_photo = 0
    # Load the pre-trained face detector
    faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
    # Open the camera
    cap = cv2.VideoCapture(0)
    while True:
        # Check if the face ID is empty, create face_id if so
        if not str_face_id.strip():
            str_face_id = input('Enter your face ID:')
            index_photo = 0
            if not os.path.exists(str_face_id):
                os.makedirs(str_face_id)
        # Read a frame from the camera
        success, img = cap.read()
        if not success:
            continue
        # Convert to grayscale
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Perform face detection
        faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(50, 50),
                                             flags=cv2.CASCADE_SCALE_IMAGE)
        # Draw rectangles around detected faces
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 3)
        # Display the detection results
        cv2.imshow("FACE", img)
        # Read the key value
        key = cv2.waitKey(1) & 0xFF
        # Press "c" key to capture faces
        if key == ord('c'):
            # Save the captured faces
            for (x, y, w, h) in faces:
                roi = img[y:y+h, x:x+w]
                cv2.imwrite("%s/%d.jpg" % (str_face_id, index_photo), roi)
                index_photo += 1
            key = 0
        # Press "x" key to switch face_id
        elif key == ord('x'):
            str_face_id = ""
            key = 0
        # Press "q" key to quit
        elif key == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

The above code uses OpenCV’s face detector to capture face images in a live video stream. Press key “c” to save the image, press key “x” to toggle face ID, and press key “q” to exit.

Facial Information Collection

Build Facial Feature Library

After collecting the face data, use LBP to build a face feature library. Note that cv2.face.LBPHFaceRecognizer_create is provided by the opencv-contrib-python package rather than the base OpenCV installation. Here is the relevant code:

					import cv2
import os
import numpy as np
# Get all folders (face IDs)
def get_face_list(path):
    for root, dirs, files in os.walk(path):
        if root == path:
            return dirs
if __name__ == "__main__":
    # Create a face recognizer
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    # Dictionary to store face IDs
    # Build a relationship between face numbers and face IDs
    dic_face = {}
    # Path to store faces
    base_path = "../face-collect/"
    # Get face IDs
    face_ids = get_face_list(base_path)
    print(face_ids)
    
    # List to store face data and IDs
    faceSamples = []
    ids = []
    # Traverse folders named after face IDs
    for i, face_id in enumerate(face_ids):
        # Update face dictionary
        dic_face[i] = face_id
        # Get the path to store face images
        path_img_face = os.path.join(base_path, face_id)
        for face_img in os.listdir(path_img_face):
            # Read files with the suffix ".jpg"
            if face_img.endswith(".jpg"):
                file_face_img = os.path.join(path_img_face, face_img)
                # Read the image and convert it to grayscale
                img = cv2.imread(file_face_img)
                img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                # Save the image and face ID
                faceSamples.append(img)
                ids.append(i)
    print(dic_face)
    # Train the model
    recognizer.train(faceSamples, np.array(ids))
    # Save the model
    recognizer.save('trainer.yml')
    # Save the dictionary
    with open("face_list.txt", 'w') as f:
        for face_id in dic_face:
            f.write("%d %s\n" % (face_id, dic_face[face_id]))

The above code traverses the face data set, builds the LBPH face recognizer, and saves the model (trainer.yml) and face ID dictionary (face_list.txt).

Build Facial Feature Library

Face Recognition

Finally, face recognition is performed by comparing the LBP features of the face captured by the camera against the feature library built in the previous step.

					import cv2
import os
import numpy as np
import serial
# Open a serial connection to communicate with external devices
ser = serial.Serial('/dev/ttyAMA0', 115200)
# Function to read face dictionary from a file
def read_dic_face(file_list):
    data = np.loadtxt(file_list, dtype='str')
    dic_face = {}
    for i in range(len(data)):
        dic_face[int(data[i][0])] = data[i][1]
    return dic_face
if __name__ == "__main__":
    # Load the face dictionary
    dic_face = read_dic_face("face_list.txt")
    print(dic_face)
    # Load OpenCV face detector
    faceCascade = cv2.CascadeClassifier('../face-collect/haarcascade_frontalface_alt.xml')
    # Load the trained face recognizer
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read('trainer.yml')
    # Open the camera
    cap = cv2.VideoCapture(0)
    while True:
        # Read a frame
        success, img = cap.read()
        if not success:
            continue
        # Convert to grayscale
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Perform face detection
        faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(50, 50),
                                             flags=cv2.CASCADE_SCALE_IMAGE)
        # Send a signal to an external device
        ser.write(str("idea").encode())
        # Traverse detected faces
        for (x, y, w, h) in faces:
            # Draw a rectangle
            cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 3)
            # Perform face recognition
            id_face, confidence = recognizer.predict(gray[y:y+h, x:x+w])
            print(confidence)
            # Check confidence level, computed by measuring distance; lower confidence means closer match
            if (confidence < 100):
                str_face = dic_face[id_face]
                str_confidence = "  %.2f" % (confidence)
            else:
                str_face = "unknown"
                id_face = 2
                str_confidence = "  %.2f" % (confidence)
            ser.write(str(id_face).encode())
            # Display recognition result as text
            cv2.putText(img, str_face + str_confidence, (x+5, y-5), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
        # Display detection results
        cv2.imshow("FACE", img)
        # Press "q" to exit
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    ser.close()

The above code combines face detection, LBPH face recognition and serial communication to implement an intelligent access control system based on LBP.
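As the comment in the code notes, the LBPH `confidence` is a distance, not a probability: lower values mean a closer match. LBP histograms are commonly compared with a chi-square distance; the sketch below (with toy histogram data) illustrates the idea, though OpenCV's exact internal formula may differ:

```python
def chi_square(h1, h2):
    """Chi-square distance between two histograms (lower = more similar).

    LBPH-style recognizers compare LBP histograms with a distance of this
    kind; the 'confidence' returned by predict() is such a distance.
    """
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

reference = [0.2, 0.3, 0.5]    # stored face histogram (toy data)
similar   = [0.25, 0.3, 0.45]  # probe from the same person (toy data)
different = [0.7, 0.2, 0.1]    # probe from another person (toy data)

print(chi_square(reference, reference))                                   # → 0.0
print(chi_square(reference, similar) < chi_square(reference, different))  # → True
```

This is why the recognition code admits a face only when `confidence < 100`: a small distance indicates the probe histogram closely matches a stored one.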

Face recognition error

Configuring the STM32

STM32CubeMX

  • Configuration of External HSE in STM32 RCC

RCC (Reset and Clock Control) is the STM32 peripheral used to configure the system clocks. Selecting an external HSE crystal oscillator as the system clock source improves the precision and stability of the system clock.

Configuration of External HSE in STM32 RCC

  • SYS Configuration

The SYS module handles system-level configuration. It is recommended to set the Debug mode to Serial Wire: SWD is the debugging interface used for observing the chip’s operating status, and keeping it enabled avoids the risk of locking yourself out of the chip by repurposing the debug pins.

SYS Configuration

  • TIM2 Configuration

The TIM module in STM32 is used for generating timing and pulse-width modulation (PWM) signals. Here, TIM2’s Channel 1 is configured as PWM output to control devices such as the SG90 servo motor.

TIM2 Configuration
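The compare values 150 and 50 that appear later in the control code can be understood through the timer arithmetic. The settings below are assumptions for a typical SG90 setup on an STM32F1 (they are not taken from this project's CubeMX screenshots, so check them against your own configuration):

```python
# Assumed timer settings (NOT taken from this project's screenshots):
TIMER_CLK_HZ = 72_000_000  # typical APB1 timer clock on STM32F1 parts
PRESCALER    = 719         # CubeMX "Prescaler" field (divides by PSC + 1)
PERIOD       = 1999        # CubeMX "Counter Period" (auto-reload) value

tick_us  = (PRESCALER + 1) * 1_000_000 / TIMER_CLK_HZ  # 10 us per timer tick
frame_ms = (PERIOD + 1) * tick_us / 1000               # 20 ms -> 50 Hz frame

def pulse_ms(compare):
    """Pulse width produced by __HAL_TIM_SET_COMPARE(&htim2, TIM_CHANNEL_1, compare)."""
    return compare * tick_us / 1000

print(frame_ms, pulse_ms(150), pulse_ms(50))  # → 20.0 1.5 0.5
```

Under these assumptions, a compare value of 150 yields a 1.5 ms pulse (roughly the SG90's center position) and 50 yields 0.5 ms, within the servo's standard 50 Hz control frame.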

  • USART1 Configuration

The USART module is used for serial communication. USART1 is configured in UART mode with a baud rate of 115200. UART interrupts are enabled for interrupt handling during data transmission.

USART1 Configuration

  • I2C Configuration

I2C is a serial communication protocol used for connecting microcontrollers to external devices; in this project, the I2C bus drives the OLED display.

I2C Configuration

  • Clock Tree Configuration

The clock tree distributes the configured clocks to the CPU core, buses, and peripherals, and is a crucial factor in the overall performance of the embedded system. Verify that the resulting frequencies match what each peripheral expects.

Clock Tree Configuration

  • Project Configuration

This involves setting parameters for the entire embedded project, including compile options, linker scripts, and library linking.

Project Configuration

STM32 Code

OLED

The OLED module is used to present the pass/deny result on the OLED screen. This functionality has two primary components: a string display function and a Chinese character display function (substitute a font for your own language if needed; Chinese is used here).

					// Parameter description: x, y -- starting coordinates (x:0~127, y:0~7); ch[] -- string to be displayed; TextSize -- character size (1:6*8; 2:8*16)
// Description: Display ASCII characters from codetab.h, with options for 6*8 and 8*16 sizes
void OLED_ShowStr(unsigned char x, unsigned char y, unsigned char ch[], unsigned char TextSize) {
    unsigned char c = 0, i = 0, j = 0;
    switch(TextSize) {
        case 1: {
            while(ch[j] != '\0') {
                c = ch[j] - 32;
                if(x > OLED_MAX_X) {
                    x = 0;
                    y++;
                }
                OLED_SetPos(x, y);
                for(i = 0; i < 6; i++)
                    WriteDat(F6x8[c][i]);
                x += 6;
                j++;
            }
        } break;
        case 2: {
            while(ch[j] != '\0') {
                c = ch[j] - 32;
                if(x > OLED_MAX_X - 8) {
                    x = 0;
                    y++;
                }
                OLED_SetPos(x, y);
                for(i = 0; i < 8; i++)
                    WriteDat(F8X16[c * 16 + i]);
                OLED_SetPos(x, y + 1);
                for(i = 0; i < 8; i++)
                    WriteDat(F8X16[c * 16 + i + 8]);
                x += 8;
                j++;
            }
        } break;
    }
}
// Chinese Character Display Function
// Parameter description: x, y -- starting coordinates (x:0~127, y:0~7); N -- index of the Chinese character in .h file
// Description: Display Chinese characters from ASCII_8x16.h, with a 16*16 matrix
void OLED_ShowCN(unsigned char x, unsigned char y, unsigned char N) {
    unsigned char wm = 0;
    unsigned int adder = 32 * N;
    OLED_SetPos(x, y);
    for(wm = 0; wm < 16; wm++) {
        WriteDat(F16x16[adder]);
        adder += 1;
    }
    OLED_SetPos(x, y + 1);
    for(wm = 0; wm < 16; wm++) {
        WriteDat(F16x16[adder]);
        adder += 1;
    }
}
// Display Chinese String Function
void OLED_ShowCN_STR(u8 x, u8 y, u8 begin, u8 num) {
    u8 i;
    for(i = 0; i < num; i++) {
        OLED_ShowCN(i * 16 + x, y, i + begin);
    }
}

UART

The Raspberry Pi 4B transmits data to the STM32 as strings, even when the underlying data is numeric. UART communication moves data over the serial port one byte at a time, so each numeric value is sent as its ASCII character representation and must be converted back to a number on the receiving side.
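A minimal sketch of this framing (the helper names are illustrative, not from the project code): the Pi sends each face ID as its ASCII digit, and the receiver recovers the integer by subtracting 48, exactly as `value = num - 48` does in the control code later.

```python
# Hypothetical helpers; the project sends the raw digit string directly.
def encode_face_id(face_id):
    """Pi side: a single-digit ID becomes one ASCII byte on the wire."""
    return str(face_id).encode()   # 1 -> b'1' (byte value 49)

def decode_face_id(byte_value):
    """STM32 side, mirrored by `value = num - 48;` in control.c."""
    return byte_value - 48         # 49 -> 1

wire = encode_face_id(1)
print(wire, decode_face_id(wire[0]))  # → b'1' 1
```

This also explains why the STM32 buffers bytes rather than integers: a multi-digit ID would arrive as several ASCII characters in sequence.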

uart.h: 

					#ifndef __UART_H
#define __UART_H
#include "stm32f1xx_hal.h"
extern UART_HandleTypeDef huart1;  // External declaration of UART handle for USART1
#define USART1_REC_LEN  600  // Maximum length of the USART1 receive buffer
extern int  USART1_RX_BUF[USART1_REC_LEN];  // USART1 receive buffer
extern uint16_t USART1_RX_STA;  // USART1 receive status
extern int USART1_NewData;  // Flag indicating new data received
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart);  // Callback function for USART1 receive interrupt
#endif

uart.c: 

					#include "uart.h"
#include "oled.h"
int USART1_RX_BUF[USART1_REC_LEN];   // Target data buffer for USART1 reception
uint16_t USART1_RX_STA = 0;  // USART1 receive status
int USART1_NewData;
extern int num;  // External variable to store the received data
// USART1 receive complete callback function
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) {
    if (huart == &huart1) {
        // Store the received byte and remember it as the most recent value
        USART1_RX_BUF[USART1_RX_STA] = USART1_NewData;
        num = USART1_NewData;
        // Advance the write index, wrapping at the end of the buffer
        USART1_RX_STA++;
        if (USART1_RX_STA > (USART1_REC_LEN - 1))
            USART1_RX_STA = 0;
        // Set up the UART to receive the next byte
        HAL_UART_Receive_IT(&huart1, (uint8_t *)&USART1_NewData, 1);
    }
}

Control

The control section of the code is primarily responsible for decoding the data sent by the Raspberry Pi 4B. After decoding, the relevant information is displayed on the OLED screen based on the index value, and the SG90 servo motor is driven to open the door. Notably, to prevent the same user from triggering the door repeatedly, a variable `i` latches the index of the last admitted user, adding a hold-off before the same result is acted on again. This design improves the robustness of the smart access control system.

					#include "control.h"
#include "uart.h"
#include "tim.h"
#include "oled.h"
int num;      // Received numerical data
int value;    // Translated index value
int i = -1;   // Index of the last admitted user; prevents repeated triggering
void SmartAccess() {
    value = num - 48; // Convert ASCII numerical value sent by Raspberry Pi 4B to dictionary index
    if (value == 0 && i != 0) {  // Liu Dehua
        OLED_ShowCN_STR(40, 4, 0, 3);
        __HAL_TIM_SET_COMPARE(&htim2, TIM_CHANNEL_1, 150);
        HAL_Delay(3000);
        __HAL_TIM_SET_COMPARE(&htim2, TIM_CHANNEL_1, 50);
        HAL_Delay(3000);
        i = 0;
        value = 4;
    }
    if (value == 1 && i != 1) {  // Jet Li
        OLED_ShowCN_STR(40, 4, 3, 3);
        __HAL_TIM_SET_COMPARE(&htim2, TIM_CHANNEL_1, 150);
        HAL_Delay(3000);
        __HAL_TIM_SET_COMPARE(&htim2, TIM_CHANNEL_1, 50);
        HAL_Delay(3000);
        i = 1;
        value = 4;
    }
    if (value == 2) {  // Unknown index value
        OLED_ShowStr(40, 4, "Unknown", 2);
    }
    if (value == 4) {
        OLED_ShowStr(40, 4, "FreeTm", 2);
        HAL_Delay(3000);
        i = -1;
    }
}

Main

					/**
  * @brief  The application entry point.
  * @retval int
  */
int main(void) {
  /* USER CODE BEGIN 1 */
  /* USER CODE END 1 */
  /* MCU Configuration--------------------------------------------------------*/
  /* Reset of all peripherals, Initializes the Flash interface and the Systick. */
  HAL_Init();
  /* USER CODE BEGIN Init */
  /* USER CODE END Init */
  /* Configure the system clock */
  SystemClock_Config();
  /* USER CODE BEGIN SysInit */
  /* USER CODE END SysInit */
  /* Initialize all configured peripherals */
  MX_GPIO_Init();
  MX_TIM2_Init();
  MX_USART1_UART_Init();
  MX_I2C2_Init();
  /* USER CODE BEGIN 2 */
	OLED_Init();
	OLED_CLS();
	HAL_UART_Receive_IT(&huart1, (uint8_t *)&USART1_NewData, 1);  // Arm reception into the byte variable used by the callback
	HAL_TIM_PWM_Start(&htim2, TIM_CHANNEL_1);
	OLED_ShowStr(10, 2, "Target Person", 2);  // Display a header on the OLED
  /* USER CODE END 2 */
  /* Infinite loop */
  /* USER CODE BEGIN WHILE */
  while (1) {
    /* USER CODE END WHILE */
    /* USER CODE BEGIN 3 */
		SmartAccess();  // Execute the SmartAccess function
  }
  /* USER CODE END 3 */
}


Ending

The current version of the smart access control system project is merely a prototype, and there is ample room for improvement. For instance, the adoption of the conventional LBP method for facial recognition is somewhat outdated. Therefore, for interested readers, it is recommended to explore the use of contemporary deep learning network models for facial recognition to enhance the system’s recognition performance.

Additionally, some readers may have noticed that the system can currently be unlocked with a smartphone photo of an authorized face, which poses a real security risk. Commercial access control systems typically integrate liveness detection to defend against this kind of spoofing.
