Social Distancing Detector with OpenCV on a Raspberry Pi 4

During the Covid-19 era, social distancing proved to be an effective way of slowing the spread of contagious viruses. People are advised to avoid close contact as much as possible because of the risk of disease transmission, and many public spaces, including workplaces, banks, bus terminals, and train stations, struggle to keep people a safe distance apart.

The previous guide covered the steps necessary to connect the PCF8591 ADC/DAC module to a Raspberry Pi 4. We saw the results displayed as integers on our Terminal and dug into exactly how the ADC produces its output signals. In this article, we will use OpenCV and a Raspberry Pi to create a system that can detect when people fail to keep a safe distance from one another. We will use the pre-trained weights of the YOLO version 3 object detection algorithm to implement the deep neural network component. Among low-cost controllers, the Raspberry Pi stands out as a strong option for image processing tasks; previous efforts using the Raspberry Pi for advanced image processing include a face recognition application.


Where To Buy?

No. | Components     | Distributor | Link To Buy
1   | Jumper Wires   | Amazon      | Buy Now
2   | PCF8591        | Amazon      | Buy Now
3   | Raspberry Pi 4 | Amazon      | Buy Now

Components 

  • Raspberry Pi 4

A Raspberry Pi 4 with OpenCV pre-installed is all that is needed for this project. OpenCV handles the digital image processing, which is commonly used for people counting, facial identification, and detecting objects in images.

YOLO

YOLO (You Only Look Once) is a convolutional neural network (CNN) approach that is invaluable for real-time object detection. Its recent version, YOLOv3, is a fast and accurate object detection algorithm that can identify eighty distinct types of objects in both still images and video. The algorithm runs a single neural network over the entire image, then divides it into regions and computes bounding boxes and probabilities for each. The base YOLO model processes images in real time at 45 frames per second and compares favorably with other detection approaches such as SSD and R-CNN.

In the past, computers relied on input devices like keyboards and mice; today, they can also analyze data from visual sources like photos and videos. Computer vision is a computer's (or a machine's) capacity to read and interpret graphic data. Computer vision has advanced to the point that it can now evaluate the nature of people and objects and even read their emotions. This is feasible because of deep learning and artificial intelligence, which allow an algorithm to learn from examples, such as recognizing relevant features in an unlabeled image. The technology has matured to the point where it can be employed in critical infrastructure protection, hotel management, and online banking payment portals.

OpenCV is the most widely used computer vision library. It is a free, open-source, cross-platform library, originally developed by Intel, that runs on Windows, Mac OS X, and Linux. Because it also runs on small devices like the Raspberry Pi, it opens up a wide range of applications. Let's dive in, then.

Setup of OpenCV on a Raspberry Pi 4

OpenCV and its prerequisites won't run without updating the Raspberry Pi to the latest version. To install the most recent software for your Raspberry Pi, type in the following commands:

sudo apt-get update

Then, use the scripts below to set up the prerequisites on your RPi so you can install OpenCV.

sudo apt-get install libhdf5-dev -y

sudo apt-get install libhdf5-serial-dev -y

sudo apt-get install libatlas-base-dev -y

sudo apt-get install libjasper-dev -y

sudo apt-get install libqtgui4 -y

sudo apt-get install libqt4-test -y

Finally, run the following lines to install OpenCV on your Raspberry Pi.

pip3 install opencv-contrib-python==4.1.0.25
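To confirm that the installation succeeded, you can print the library version from the command line (a quick sanity check, not part of the original steps):

python3 -c "import cv2; print(cv2.__version__)"

If this prints 4.1.0, OpenCV is ready to use.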

Setting Up OpenCV Using CMake

OpenCV's installation on a Raspberry Pi can be nerve-wracking because it takes a long time, and there's a good possibility you'll make a mistake. Given my own experiences with this, I've tried to make this lesson as straightforward and helpful as possible so that you won't have to go through the same things I did. Even though OpenCV 4.0.1 had been out for three months when I started writing this lesson, I decided to use the older version (4.0.0) because of some issues with compiling the newer version.

This approach involves retrieving OpenCV's source package and compiling it on a Raspberry Pi with the help of CMake. Installing OpenCV in a virtual environment allows users to run many versions of Python and OpenCV on the same computer. But I'm not going to do that since I'd rather keep this essay brief and because I don't anticipate that it will be required any time soon.

Step 1: Before we get started, let's ensure that our system is up to date by executing the command below:

sudo apt-get update && sudo apt-get upgrade

If there are updated packages, they should be downloaded and installed automatically. There is a 15-20 minute wait time for the process to complete.

Step 2: We must now update the apt-get package to download CMake.

sudo apt-get update

Step 3: When we've finished updating apt-get, we can use the following command to retrieve the CMake package and put it in place on our machine.

sudo apt-get install build-essential cmake unzip pkg-config

When installing CMake, your screen should look similar to the one below.

Step 4: Then, use the following command to set up Python 3's development headers:

sudo apt-get install python3-dev

Since it was pre-installed on mine, the screen looks like this.

Step 5: The next action is to obtain the OpenCV archive from GitHub. Here's the command to download it:

wget -O opencv.zip https://github.com/opencv/opencv/archive/4.0.0.zip

You can see that we are collecting version 4.0.0 right now.

Step 6: The OpenCV contrib repository contains various pre-built Python modules that will make our development efforts more efficient. Let's download it as well, with a command nearly identical to the one above.

wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.0.0.zip

The "OpenCV-4.0.0" and "OpenCV-contrib-4.0.0" zip files should now be in your home directory. If you need to know for sure, you may always go ahead and check it out.

Step 7: Let's extract OpenCV 4.0.0 from its .zip archive with the following command.

unzip opencv.zip

Step 8: Extraction of OpenCV contrib-4.0.0 via the command line is identical.

unzip opencv_contrib.zip

Step 9: OpenCV cannot function without NumPy. Follow the command below to begin the installation.

pip install numpy

Step 10: Our home directory now contains two folders: opencv-4.0.0 and opencv_contrib-4.0.0. Next, we'll make a new directory inside opencv-4.0.0 named "build" to perform the actual compilation of the OpenCV library. The steps needed to achieve this are detailed below.

cd ~/opencv-4.0.0

mkdir build

cd build

Step 11: OpenCV's CMake process must now be initiated. In this section, we specify the requirements for compiling OpenCV. Verify that you are inside the "~/opencv-4.0.0/build" directory, then paste the lines below into the Terminal.

cmake -D CMAKE_BUILD_TYPE=RELEASE \

    -D CMAKE_INSTALL_PREFIX=/usr/local \

    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.0.0/modules \

    -D ENABLE_NEON=ON \

    -D ENABLE_VFPV3=ON \

    -D BUILD_TESTS=OFF \

    -D WITH_TBB=OFF \

    -D INSTALL_PYTHON_EXAMPLES=OFF \

    -D BUILD_EXAMPLES=OFF ..

Hopefully, the configuration will proceed without a hitch, and you'll see "Configuring done" and "Generating done" in the output.

If you encounter an issue during this procedure, check that the correct path was entered and that the "opencv-4.0.0" and "opencv_contrib-4.0.0" directories exist in your home directory.

Step 12: This is the most time-consuming step of the whole process. Using the following command, you can compile OpenCV, but only if you are in the "~/opencv-4.0.0/build" directory.

make -j4

Using this method, you may initiate the OpenCV compilation process and view the status in percentage terms as it unfolds. After three to four hours, you will see a completed build screen.

The command "make -j4" utilizes all four processor cores when compiling OpenCV. Some people may feel impatient waiting for a 99% success rate, but eventually, it will be worth it.

After waiting an hour, I had to cancel the process and rebuild it with "make -j1," which did the trick. It is advisable first to use make j4 since that will utilize all four of pi's cores, and then use make j1, as make j4 will complete most of the compilation.

Step 13: If you are at this point, congratulations. You have made it through the entire procedure with flying colors. The final action is to run the following command to install the OpenCV development libraries.

sudo apt-get install libopencv-dev python-opencv

Step 14: Finally, a little python script can be run to verify that the library was successfully installed. Try "import cv2" in Python, as demonstrated below. You shouldn't get any error message when you do this.
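A minimal check session looks like this (the exact version string depends on the build you compiled):

python3
>>> import cv2
>>> cv2.__version__
'4.0.0'

If the import completes without an error message, the build was successful.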

Completing the Raspberry Pi Software Installation of Necessary Additional Packages

Let's get the necessary packages set up on the Raspberry Pi before we begin writing the code for the social distance detector.

Installing imutils

imutils is designed to simplify common OpenCV image processing tasks like translating, rotating, resizing, skeletonizing, and displaying pictures via Matplotlib. To get imutils, type in the following command:

pip3 install imutils
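As a quick illustration of what imutils does, the snippet below resizes an image while preserving its aspect ratio, which is exactly how the detector script later shrinks each video frame ("test.jpg" is a placeholder filename, so this is only a sketch):

import cv2
import imutils

image = cv2.imread("test.jpg")               # placeholder input image
resized = imutils.resize(image, width=480)   # width fixed, height scaled to keep aspect ratio
print(image.shape, "->", resized.shape)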

A Breakdown of the Program

The complete code may be found at the bottom of the page. In this section, we'll walk you through the most crucial parts of the code so you can understand it better. All the necessary libraries for this project should be imported at the beginning of the code.

import numpy as np

import cv2

import imutils

import os

import time

Distances between objects or points in a video frame can be determined with the Check() function. The points a and b represent the centers of two detected people in the frame, and they serve as the start and end points of a calibrated Euclidean distance calculation; the calibration term scales the threshold roughly with how far the people appear to be from the camera.

def Check(a,  b):

    dist = ((a[0] - b[0]) ** 2 + 550 / ((a[1] + b[1]) / 2) * (a[1] - b[1]) ** 2) ** 0.5

    calibration = (a[1] + b[1]) / 2     

    if 0 < dist < 0.25 * calibration:

        return True

    else:

        return False
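As a rough sanity check of Check() (the coordinates below are made-up pixel positions, not values from the article), two centers that are close together relative to their position in the frame should return True, while widely separated centers should return False:

print(Check([200, 300], [220, 305]))   # nearby centers -> True
print(Check([100, 300], [400, 310]))   # far-apart centers -> False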

The Setup() function sets the locations of the YOLO weights, the configuration file, and the COCO names file. The os.path module handles ordinary pathname operations, and os.path.sep.join() intelligently combines two or more path components. The cv2.dnn.readNetFromDarknet() function loads the network with the saved weights; once the weights have been loaded, the network's layer names can be extracted with getLayerNames().

def Setup(yolo):

    global neural_net, ln, LABELS

    weights = os.path.sep.join([yolo, "yolov3.weights"])

    config = os.path.sep.join([yolo, "yolov3.cfg"])

    labelsPath = os.path.sep.join([yolo, "coco.names"])

    LABELS = open(labelsPath).read().strip().split("\n") 

    neural_net = cv2.dnn.readNetFromDarknet(config, weights)

    ln = neural_net.getLayerNames()

    ln = [ln[i[0] - 1] for i in neural_net.getUnconnectedOutLayers()]
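Note that the indexing ln[i[0] - 1] assumes the OpenCV 4.0/4.1 behavior used in this guide, where getUnconnectedOutLayers() returns nested arrays. If you run the script on a newer OpenCV release that returns a flat array of integers instead, a version-tolerant variant (a sketch, not part of the original script) is:

out_layers = neural_net.getUnconnectedOutLayers()
# Older OpenCV returns [[200], [227], ...]; newer releases return [200, 227, ...]
ln = [ln[int(np.ravel(i)[0]) - 1] for i in out_layers]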

In the image processing section, we extract a still frame from the video and analyze it to find the distance between the people in the crowd. The function's first line initializes the frame's width and height to None. We then use the cv2.dnn.blobFromImage() method to prepare the frame for the network; the blob function rescales the frame, handles mean subtraction, and reorders the color channels.

(H, W) = (None, None)

    frame = image.copy()

    if W is None or H is None:

        (H, W) = frame.shape[:2]

    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

    neural_net.setInput(blob)

    starttime = time.time()

    layerOutputs = neural_net.forward(ln)

YOLO's layer outputs are numerical values that let us decide which detections belong to which classes. We iterate over all layerOutputs and keep only the detections whose class label is "person". Each detection produces a bounding box described by the X and Y coordinates of its center along with its width and height.

            scores = detection[5:]

            maxi_class = np.argmax(scores)

            confidence = scores[maxi_class]

            if LABELS[maxi_class] == "person":

                if confidence > 0.5:

                    box = detection[0:4] * np.array([W, H, W, H])

                    (centerX, centerY, width, height) = box.astype("int")

                    x = int(centerX - (width / 2))

                    y = int(centerY - (height / 2))

                    outline.append([x, y, int(width), int(height)])

                    confidences.append(float(confidence))
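Between collecting the detections and measuring distances, the complete script (shown at the bottom of this page) prunes overlapping boxes for the same person using non-maximum suppression:

box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)

Here 0.5 is the confidence cut-off and 0.3 the overlap threshold; only the surviving box indices are used for the distance checks that follow.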

Then, determine how far the center of each box is from the centers of all other boxes. If two boxes are closer than the threshold, Check() returns True and both are marked as violating the distance rule.

for i in range(len(center)):

            for j in range(len(center)):

                close = Check(center[i], center[j])

                if close:

                    pairs.append([center[i], center[j]])

                    status[i] = True

                    status[j] = True

        index = 0

In the following lines, we use the model's box dimensions to draw a rectangle around each individual and evaluate whether they are at a safe distance. If there is too little space between two boxes, their rectangles are drawn in red; otherwise, they are green.

            (x, y) = (outline[i][0], outline[i][1])

            (w, h) = (outline[i][2], outline[i][3])

            if status[index] == True:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 150), 2)

            elif status[index] == False:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

Now we're inside the main loop, where we read each video frame and analyze it to determine how far apart the people are.

ret, frame = cap.read()

    if not ret:

        break

    current_img = frame.copy()

    current_img = imutils.resize(current_img, width=480)

    video = current_img.shape

    frameno += 1

    if(frameno%2 == 0 or frameno == 1):

        Setup(yolo)

        ImageProcess(current_img)

        Frame = processedImg

In the following lines, we use the cv2.VideoWriter() function to save the output video to the location defined by opname.

if create is None:

            fourcc = cv2.VideoWriter_fourcc(*'XVID')

            create = cv2.VideoWriter(opname, fourcc, 30, (Frame.shape[1], Frame.shape[0]), True)

    create.write(Frame)

Testing the program

When satisfied with your code, launch a terminal on your Pi and go to the directory where you kept it. The following folder structure is recommended for storing the code, Yolo framework, and demonstration video.

The YOLOv3 weights can be downloaded from:

https://pjreddie.com/media/files/yolov3.weights

and sample videos from:

https://www.pexels.com/search/videos/pedestrians/

Finally, paste the Python scripts provided below into the same folder as the one displayed above. The following command must be run once you've entered the project directory:

python3 detector.py

I applied this code to a sample video I found on Pexels, and the results were interesting: the frame rate was terrible, and processing the whole clip took almost 11 minutes.

Changing the capture line from cv2.VideoCapture(filename) to cv2.VideoCapture(0) allows you to test the code with a live camera instead of a video file, as shown below. This is how you can use OpenCV on a Raspberry Pi to detect social distancing violations.
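In code, the change is just this one line (0 selects the first attached camera, such as a USB webcam):

cap = cv2.VideoCapture(0)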

Complete code

import numpy as np

import cv2

import imutils

import os

import time

def Check(a,  b):

    dist = ((a[0] - b[0]) ** 2 + 550 / ((a[1] + b[1]) / 2) * (a[1] - b[1]) ** 2) ** 0.5

    calibration = (a[1] + b[1]) / 2       

    if 0 < dist < 0.25 * calibration:

        return True

    else:

        return False

def Setup(yolo):

    global net, ln, LABELS

    weights = os.path.sep.join([yolo, "yolov3.weights"])

    config = os.path.sep.join([yolo, "yolov3.cfg"])

    labelsPath = os.path.sep.join([yolo, "coco.names"])

    LABELS = open(labelsPath).read().strip().split("\n")  

    net = cv2.dnn.readNetFromDarknet(config, weights)

    ln = net.getLayerNames()

    ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

def ImageProcess(image):

    global processedImg

    (H, W) = (None, None)

    frame = image.copy()

    if W is None or H is None:

        (H, W) = frame.shape[:2]

    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

    net.setInput(blob)

    starttime = time.time()

    layerOutputs = net.forward(ln)

    stoptime = time.time()

    print("Video is Getting Processed at {:.4f} seconds per frame".format((stoptime-starttime))) 

    confidences = []

    outline = []

    for output in layerOutputs:

        for detection in output:

            scores = detection[5:]

            maxi_class = np.argmax(scores)

            confidence = scores[maxi_class]

            if LABELS[maxi_class] == "person":

                if confidence > 0.5:

                    box = detection[0:4] * np.array([W, H, W, H])

                    (centerX, centerY, width, height) = box.astype("int")

                    x = int(centerX - (width / 2))

                    y = int(centerY - (height / 2))

                    outline.append([x, y, int(width), int(height)])

                    confidences.append(float(confidence))

    box_line = cv2.dnn.NMSBoxes(outline, confidences, 0.5, 0.3)

    if len(box_line) > 0:

        flat_box = box_line.flatten()

        pairs = []

        center = []

        status = [] 

        for i in flat_box:

            (x, y) = (outline[i][0], outline[i][1])

            (w, h) = (outline[i][2], outline[i][3])

            center.append([int(x + w / 2), int(y + h / 2)])

            status.append(False)

        for i in range(len(center)):

            for j in range(len(center)):

                close = Check(center[i], center[j])

                if close:

                    pairs.append([center[i], center[j]])

                    status[i] = True

                    status[j] = True

        index = 0

        for i in flat_box:

            (x, y) = (outline[i][0], outline[i][1])

            (w, h) = (outline[i][2], outline[i][3])

            if status[index] == True:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 150), 2)

            elif status[index] == False:

                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

            index += 1

        for h in pairs:

            cv2.line(frame, tuple(h[0]), tuple(h[1]), (0, 0, 255), 2)

    processedImg = frame.copy()

create = None

frameno = 0

filename = "newVideo.mp4"

yolo = "yolov3/"

opname = "output2.avi"

cap = cv2.VideoCapture(filename)

time1 = time.time()

while(True):

    ret, frame = cap.read()

    if not ret:

        break

    current_img = frame.copy()

    current_img = imutils.resize(current_img, width=480)

    video = current_img.shape

    frameno += 1

    if(frameno%2 == 0 or frameno == 1):

        Setup(yolo)

        ImageProcess(current_img)

        Frame = processedImg

        cv2.imshow("Image", Frame)

        if create is None:

            fourcc = cv2.VideoWriter_fourcc(*'XVID')

            create = cv2.VideoWriter(opname, fourcc, 30, (Frame.shape[1], Frame.shape[0]), True)

    create.write(Frame)

    if cv2.waitKey(1) & 0xFF == ord('s'):

        break

time2 = time.time()

print("Completed. Total Time Taken: {} minutes".format((time2-time1)/60))

cap.release()

cv2.destroyAllWindows()

Why is Social Distancing Detection helpful?

  1. Convincing Workers

Since 41% of workers say they won't return to their desks until they feel safe, installing social distancing detection is an excellent way to reassure them that the risk is being managed. People without fevers can still be contagious, which makes this solution preferable to thermal imaging cameras.

  2. Space Utilization

Using the detection program, you can find out which areas of the workplace are the most crowded, giving you the information you need to implement the right precautions.

  3. Monitoring and Taking Measures

The software can also be connected to security video systems outside the office, such as in a factory where workers are frequently close to one another, making it possible to monitor the work environment and single out those who get too close to others.

  4. Tracking the Queues

Queue monitoring is a valuable addition to security cameras for businesses in retail, healthcare, and other sectors where waiting in line is unavoidable. The cameras can monitor and recognize whether people are following the social distancing requirements. The system can be configured to work with automatic barricades and digital billboards to provide real-time alerts along with health and safety information.

Consequences of Isolation

The adverse effects of social isolation include the following:

  • Its efficacy decreases when mosquitoes, infected food or water, or other vectors are predominantly responsible for spreading disease.

  • If a person isn't used to being in a social setting, they may become lonely and depressed.

  • Productivity drops, and other benefits of interacting with other people are lost.

Conclusion

This tutorial showed us how to build a social distance detection system. The technology uses AI and deep learning to analyze visual data, and incorporating computer vision allows for reasonably accurate distance estimates between people. A red box appears around any group that violates the minimum acceptable distance threshold. The system was developed using previously shot footage of a busy roadway, and it can estimate the distance between individuals, classifying the spacing as either a "safe" or an "unsafe" distance. It also shows labels according to object detection and classification.

The classifier can be used in real-time applications and deployed on live video streams. During pandemics, this technology can be combined with CCTV to monitor public spaces. Because such mass screening is practical, it can be implemented in high-traffic areas such as airports, bus terminals, markets, streets, shopping mall entrances, campuses, and even workplaces and restaurants. Keeping an eye on the distance between two people lets us ensure that sufficient space is maintained between them.

Interface PCF8591 ADC/DAC Analog Digital Converter Module with Raspberry Pi 4

Welcome back to another Python tutorial for the Raspberry Pi 4! The previous tutorial showed us how to construct a Raspberry Pi-powered cell phone with a microphone and speaker for making and receiving calls and reading text messages (SMS). To make our Raspberry Pi 4 into a fully functional smartphone, we built software in Python. As we monitored text and phone calls being sent and received between the raspberry pi and our mobile phone, we experienced no technical difficulties. But in this tutorial, you'll learn how to hook up the PCF8591 ADC/DAC module to a Raspberry Pi 4.

Since most sensors only output their data in analog values, converting them to binary values that a microcontroller can understand is a crucial part of any integrated electronics project. A microcontroller's ability to process analog data necessitates using an analog-to-digital converter.

Some microcontrollers, including the Arduino, MSP430, and PIC16F877A, contain an onboard analog-to-digital converter (ADC), whereas others, like the 8051 and Raspberry Pi, do not.

Where To Buy?

No. | Components     | Distributor | Link To Buy
1   | Jumper Wires   | Amazon      | Buy Now
2   | PCF8591        | Amazon      | Buy Now
3   | Raspberry Pi 4 | Amazon      | Buy Now

Required Components

  1. Raspberry Pi 4

  2. PCF8591 ADC Module

  3. 100K Pot

  4. Jumper wires

It is assumed that you have a Raspberry Pi 4 with the most recent version of Raspbian OS installed on it, and that you are familiar with using a terminal program like PuTTY to connect to the Pi over the network and access its file system remotely. Those unfamiliar with the Raspberry Pi can learn the basics by reading the articles below.

PCF8591 ADC/DAC Module

The PCF8591 module provides four analog inputs and one analog output, each with 8-bit resolution (values from 0 to 255). The board has a thermistor and an LDR circuit on board. To support the I2C protocol, it has dedicated serial clock (SCL) and serial data (SDA) pins. The supply voltage ranges from 2.5 to 6V, and the stand-by current is minimal. We can also turn the module's potentiometer knob to control the input voltage. A total of three jumpers can be found on the board: connecting J4, J5, and J6 switches between the thermistor, the LDR/photoresistor, and the adjustable-voltage circuits. D1 and D2 are two LEDs on the board, with D1 indicating the strength of the output voltage and D2 indicating the strength of the supply voltage. When the output or supply voltage increases, the brightness of D1 or D2 increases correspondingly. The LEDs can also be tested using a potentiometer connected to the VCC or AOUT pins.

Microprocessors, Arduinos, Raspberry Pis, and other digital logic circuits can interact with the physical environment thanks to Analogue-to-Digital Converters (ADCs). Many digital systems gather information about their settings by analyzing the analog signals produced by transducers such as microphones, light detectors, thermometers, and accelerometers. These signals constantly vary in value since they are derived from the physical world.

Digital circuits use binary signals, which can only be in one of two states, "1" (HIGH) or "0" (LOW), as opposed to the continuously variable voltage values provided by analog signals. Therefore, an analog-to-digital converter (A/D) is an essential electronic circuit for translating continuously varying analog signals into discrete digital ones.

To put it simply, an analog-to-digital converter (ADC) is a device that, given an instantaneous reading of an analog voltage, generates a unique digital output code that represents that reading. The number of binary digits, or bits, used to represent the original analog voltage determines the precision of the converter.
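For example, the mapping from an input voltage to an n-bit code can be sketched in a few lines of Python (assuming an ideal converter with full-scale reference voltage vref; the values are illustrative):

def adc_code(vin, vref=5.0, bits=8):
    """Ideal ADC: map 0..vref onto the codes 0..2**bits - 1."""
    code = int(vin / vref * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the valid code range

print(adc_code(2.5))  # mid-scale input -> 127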

Analogue and Digital Signals

By rotating a potentiometer's wiper terminal between 0 and VMAX, we get a continuous output signal with an infinite number of output values related to the wiper position. As the wiper moves between positions, the output voltage varies continuously. Variations in temperature, pressure, liquid level, and brightness are all examples of analog signals.

A digital circuit uses a single rotary switch to control the potential divider network, taking the place of the potentiometer's wiper at each node. The output voltage, VOUT, rapidly transitions from one node to the next as the switch is turned, with each node's value representing a multiple of 1.0 volts.

The output is guaranteed to land on a discrete step such as 2 volts or 3 volts, but never on an intermediate value such as 2.5, 3.1, or 4.6 volts. Using a switch with more positions, and more resistive components in the voltage-divider network to create more discrete switching steps, would allow for finer output voltage levels.

By this definition, we can see that a digital signal has discrete (step-by-step) values, while an analog signal's values change continuously over time: a digital output jumps from "LOW" to "HIGH" or from "HIGH" to "LOW."

So the question becomes how to transform an infinitely variable signal into one with discrete values or steps that a digital circuit can work with.

Converting from Analog to Digital

Although several commercially available analog-to-digital converter (ADC) chips exist, such as the ADC08xx family, for converting analog voltage signals to their digital equivalents, a primary ADC can be constructed out of discrete components.

Using comparators to detect various voltage levels and output their switching signal state to an encoder is a straightforward method known as parallel encoding, flash encoding, simultaneous encoding, or multiple comparator converters.

For a given n-bit resolution, the converter is built from a chain of precision resistors, which sets the equally spaced reference voltages, and a matching series of comparators.

As soon as an analog signal is applied to the comparator inputs, it is evaluated against a reference voltage, which makes parallel converters attractive: they are easy to construct and need no timing clock. The following comparator circuit illustrates the idea.

A Logic Comparator

The LM339N is an analog comparator that compares the relative magnitudes of two voltage levels via its two analog inputs (one positive and one negative).

The comparator receives two signals, one representing the input voltage (VIN) and the other representing the reference voltage (VREF). The comparator's output state, "1" or "0," is determined by comparing these two voltages at its inputs.

One input (VREF) receives a reference voltage, and the other input (VIN) receives the input voltage to be compared with it. An LM339 comparator's output is "OFF" when the input voltage is lower than the reference (VIN < VREF) and "ON" when the input voltage is higher than it (VIN > VREF). A comparator is, in short, a device that determines which of two voltages is greater.

Using the potential divider network formed by R1 and R2, we can set VREF. If the two resistors are identical in value (R1 = R2), the reference voltage is simply half the supply voltage (V/2). Therefore, like a 1-bit ADC, the output of the open-collector comparator is HIGH if VIN is lower than V/2 and LOW otherwise.
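The divider relationship being used here is VREF = V · R2 / (R1 + R2), which reduces to V/2 when R1 = R2. A two-line check (the component values are illustrative):

def vref(v_supply, r1, r2):
    # potential divider: VREF is taken across R2
    return v_supply * r2 / (r1 + r2)

print(vref(5.0, 10e3, 10e3))  # equal resistors -> 2.5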

By increasing the number of resistors in the voltage-divider circuit, we can "divide" the voltage source by an amount set by the ratio of the resistors' values. However, the number of comparators needed increases with the number of resistors in the voltage-divider network.

For an "n"-bit binary output, where "n" is commonly between 8 and 16 bits, a 2n- 1 comparator would be needed in general. As we saw previously, the comparator utilized by the one-bit ADC to determine whether or not VIN was more significant than the V/2 voltage output was 21 minus 1, which equals 1.

If we want to build a 2-bit ADC, we need 2^2 − 1 = 3 comparators, since the 4-to-2-bit encoder circuitry depicted above requires four distinct voltage levels to represent the four digital values.

Circuit for 2-bit A/D Conversion

For each of the four potential ranges of the analog input:

A/D Conversion Output, 2-Bit

VIN         | D3 (U3) D2 (U2) D1 (U1) | Q1 Q0
0 V to 1 V  |    0       0       0    |  0  0
1 V to 2 V  |    0       0       1    |  0  1
2 V to 3 V  |    0       1       X    |  1  0
above 3 V   |    1       X       X    |  1  1

Where X is a "don't care" state, representing either a logic 0 or 1.

So how does this analog-to-digital converter operate? To be of any use, an A/D converter must generate a faithful digital representation of the analog input signal. To keep things straightforward, we've assumed that VIN is between 0 and 4 volts and have adjusted VREF and the voltage-divider network so that there is a 1 V drop across each resistor in this simple 2-bit analog-to-digital example.

A binary zero (00) is output by the encoder on pins Q0 and Q1 when the input voltage VIN is below the first reference level, that is, when VIN is between 0 and 1 volt. Since comparator U1's reference input is set to 1 volt, when VIN rises above 1 volt but stays below 2 volts, U1's output goes HIGH. The priority encoder, used for the 4-to-2-bit encoding, sees the change on input D1 and generates a binary result of "1" (01).

Remember that the inputs of a priority encoder, like the TTL 74LS148, are all assigned different priority levels. The highest-priority active input always determines the output of the priority encoder, so when a higher-priority input is active, lower-priority inputs are disregarded. Therefore, if several inputs are at logic "1" simultaneously, only the highest-priority input is reflected in the output code on Q0 and Q1.

Thus, when VIN rises above 2 volts, the next reference level, comparator U2 senses the change and outputs HIGH. When VIN exceeds 3 volts, the priority encoder outputs a binary "3" (11), as the corresponding encoder input has a higher priority than the lower ones. Each comparator presents a HIGH or LOW state to the encoder, generating 2-bit binary data between 00 and 11 as VIN changes across the reference voltage levels.
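The behavior just described is easy to simulate: three ideal comparators with 1 V, 2 V, and 3 V references feed a priority encoder that reports the highest active input (a sketch of the principle, not a circuit-accurate model):

def flash_adc_2bit(vin):
    # comparator outputs for the 1 V, 2 V, and 3 V references
    comparators = [vin > 1.0, vin > 2.0, vin > 3.0]
    # priority encoding: the highest TRUE comparator determines the code
    return sum(comparators)  # 0..3, i.e. binary 00..11

for v in [0.5, 1.5, 2.5, 3.5]:
    print(v, "V ->", format(flash_adc_2bit(v), "02b"))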

This is great and all, but commercially available priority encoders, like the TTL 74LS148, are 8-input circuits, and if we use one of these for a 2-bit converter, most of its inputs go unused. Instead, a straightforward encoder circuit can be built from digital Ex-OR gates and a grid of signaling diodes.

Diode-based 2-bit ADC

Before feeding the diodes, the results of the comparators go through an Exclusive-OR gate to be encoded. Whenever the diode is reverse biased, an external pull-down resistor is connected between the diodes' outputs and ground (0V) to maintain a LOW state and prevent the outputs from floating.

Also, as with the main board, the value of VIN controls which comparator sends a HIGH (or LOW) signal to the exclusive-OR gates, which output HIGH if either of their inputs is HIGH but not both (the corresponding Boolean expression is Q = A·B̄ + Ā·B). These Ex-OR gates could also be built from combinational AND-OR-NAND logic.

The difficulty with both of these 4-to-2 converter designs is that the analog input voltage at VIN needs to change by one full volt before the encoder changes its output code, limiting the precision of this simple two-bit A/D converter to 1 volt. The output resolution can be improved by employing more comparators, converting to a three-bit A/D converter.

A/D Converter, 3-Bit

The parallel ADC above takes an analog input voltage between 0 and just over 3 volts and turns it into a binary code with only 2 bits. Since a 3-bit digital system has 2^3 = 8 possible outputs, the input analog voltage can instead be compared against a scale of eight reference voltages, each one-eighth (1/8) of the supply voltage. This means that we can now measure to an accuracy of 0.5 V (4 V / 8) and that 2^3 − 1 = 7 comparators are needed to generate a binary code with 3-bit resolution (from 000 (0) to 111 (7)).

Circuit for 3-bit Analog-to-Digital Conversion

This provides a three-bit code for each of the eight potential ranges of the analog input:

The Result of a Three-Bit Analog-to-Digital Converter

VIN             | Q2 Q1 Q0
0 to 0.5 V      |  0  0  0
0.5 V to 1.0 V  |  0  0  1
1.0 V to 1.5 V  |  0  1  0
1.5 V to 2.0 V  |  0  1  1
2.0 V to 2.5 V  |  1  0  0
2.5 V to 3.0 V  |  1  0  1
3.0 V to 3.5 V  |  1  1  0
above 3.5 V     |  1  1  1

In the full comparator table, an "X" may be a logic 0 or a logic 1, indicating a "don't care" state.

Then we can see that increasing the ADC's resolution requires more comparators and more reference voltage levels, and yields more output binary bits.

Therefore, an analog-to-digital converter with 4-bit resolution needs 15 (2^4 − 1) comparators, an 8-bit resolution requires 255 (2^8 − 1), a 10-bit resolution needs 1023 (2^10 − 1), and so on. The complexity of this type of analog-to-digital converter circuit therefore grows rapidly as the number of output bits increases.
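The growth is easy to tabulate (2^n − 1 comparators for an n-bit flash converter):

for n in [1, 2, 3, 4, 8, 10]:
    print(f"{n}-bit flash ADC needs {2 ** n - 1} comparators")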

Because of its fast real-time conversion rate, a parallel or flash A/D converter can easily be worked into a project when only a few binary bits are needed to represent an input analog signal on a display unit.

As an input interface component, an analog-to-digital converter turns the analog signal from sensors or transducers into a digital binary code. Similarly, a digital binary code can be converted back into a comparable analog quantity using a digital-to-analog converter (DAC) for output interfacing, to operate a motor or actuator or, more often, in audio applications.

Raspberry Pi's I2C pins

Knowing the Raspberry Pi's I2C port pins and setting up the I2C connection on the Pi 4 are the initial steps in using a PCF8591 with the Pi.

GPIO2 (SDA) and GPIO3 (SCL) on the Raspberry Pi are used for I2C communication in this guide.

Raspberry Pi I2C Configuration

I2C support is disabled on the Raspberry Pi by default, so it must be activated before anything else. To turn on the Raspberry Pi's I2C port:

  1. First, open a terminal and enter sudo raspi-config.

  2. The Raspberry Pi Software Configuration Tool opens.

  3. Activate I2C by selecting Interfacing Options.

  4. Restart the Pi after enabling I2C.

Reading the PCF8591's I2C Address with a Raspberry Pi

The Raspberry Pi has to know the I2C address of the PCF8591 IC before communication can begin. You can find the address by linking the PCF8591's SDA and SCL pins to the Raspberry Pi's own SDA and SCL pins; the 5V and GND pins should be connected as well.

You can find the address of an attached I2C device by opening a terminal and entering one of the following commands:

sudo i2cdetect -y 1

sudo i2cdetect -y 0

After locating the I2C address, the next step is constructing the circuit and installing the required libraries to use the PCF8591 with a Raspberry Pi 4.

Connecting the PCF8591 ADC/DAC Module to the Raspberry Pi 4

The circuit diagram for interfacing the PCF8591 with the Raspberry Pi is straightforward. In this interfacing example, we read the analog signal from one of the analog inputs and display it in the Raspberry Pi terminal, using a 100K pot to vary the input.

Connect the module's VCC and GND to the Pi's power supply and ground. Then hook up the module's SDA and SCL to GPIO2 and GPIO3, respectively. Finally, link AIN0 to the 100K pot. Instead of using the Terminal to view the ADC values, a 16x2 LCD can be added.

The A/D Conversion Python Program

The complete code and demo video are included at the end of this guide.

To communicate over the I2C bus, first import the SMBus library; the time library is used to add a delay between printed readings.

import smbus

import time

Now create some variables. The first variable holds the I2C bus address, and the second holds the address of the first analog input pin.

address = 0x48

A0 = 0x40
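The value 0x40 is the PCF8591 control byte with the analog-output-enable bit set and input channel 0 selected. To read the other three inputs, the usual pattern (based on the PCF8591 datasheet; only A0 appears in the original script) is:

A0 = 0x40  # channel AIN0
A1 = 0x41  # channel AIN1
A2 = 0x42  # channel AIN2
A3 = 0x43  # channel AIN3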

Next, we call the smbus library's SMBus(1) function to create a bus object.

bus = smbus.SMBus(1)

The first line in the while loop tells the IC to take a reading from the first analog input pin. The second line reads the converted value back from the chip and stores it in a variable, which is then printed.

while True:

    bus.write_byte(address,A0)

    value = bus.read_byte(address)

    print(value)

    time.sleep(0.1)

Finally, save the Python script in a file ending in .py and run it in the Raspberry Pi terminal with the command below.

python filename.py

Ensure that I2C communication is turned on and that the pins are wired according to the diagram before running the code, or you will get errors. The analog readings should now appear in the terminal as shown below, and the values gradually shift as you turn the pot's knob. You can see more of the program in action in the demo video at the end of this guide.

Here is the full Python script.

import smbus

import time

address = 0x48

A0 = 0x40

bus = smbus.SMBus(1)

while True:

    bus.write_byte(address,A0)

    value = bus.read_byte(address)

    print(value)

    time.sleep(0.1)

ADC's Practical Uses

We rely heavily on electronic gadgets in today's high-tech society, and digital signals are the driving force behind these devices. While most quantities are now handled digitally, many real-world signals are still analog, so an ADC is employed to transform analog signals into digital ones. ADCs are used in an almost infinite variety of contexts. Here are only a few examples:

  • Cell phones work with digitized voice signals. The voice is first transformed into digital form using an ADC before being passed to the phone's transmitter.

  • Digital photos and movies shot with a camera can be viewed on any computer or mobile device thanks to an analog-to-digital converter.

  • X-rays and MRIs are just two examples of medical imaging techniques that use an ADC to convert analog signals to digital form before further processing, after which the images are adjusted for easy viewing.

  • ADC converters can also transfer music from a cassette tape to a digital format, such as a CD or a USB flash drive.

  • The analog-to-digital converter (ADC) in a digital oscilloscope converts analog signals to digital ones that can then be displayed and used for further analysis.

  • An air conditioner's built-in temperature sensors allow for consistent comfort levels. The onboard controller reads the temperature from the ADC and makes adjustments based on the data it receives.

Nowadays, practically everything has a digital counterpart, so nearly every gadget must also include an ADC, for the simple reason that its operations take place in a digital domain accessible only via an analog-to-digital converter.

Conclusion

This piece taught us how to connect a Raspberry Pi 4 to a PCF8591 analog-to-digital converter module. We observed the output displayed as integers on our Terminal and also examined how the ADC generates its output signals. Next, we will use OpenCV and a Raspberry Pi 4 to create a social distance detector.

Syed Zain Nasir

I am Syed Zain Nasir, the founder of The Engineering Projects (TEP, https://www.TheEngineeringProjects.com/). I have been a programmer since 2009; before that, I just searched for things and made small projects, and now I am sharing my knowledge through this platform. I also work as a freelancer and have done many projects related to programming and electrical circuitry. My Google profile: https://plus.google.com/+SyedZainNasir/
