Introduction to Matrix in MATLAB

Hello, learners! Welcome to The Engineering Projects. We are working on MATLAB, and in this tutorial, you are going to learn a lot about matrices in MATLAB. We will cover them from scratch while avoiding unnecessary detail about the topic. So, without wasting time, have a look at the topics that you will learn in detail.

  • What is an array?

  • What is a matrix?

  • How can we declare a matrix in MATLAB?

  • What are the different types of matrices?

  • Can we find the unknown values of two equal matrices?

  • How can we solve simultaneous equations in MATLAB?

What is an Array?

In this world of technology, the use of data is everywhere, and therefore, we can say there is a need for arrays in every field. You will find the reason soon. But before this, look at the introduction of an array.

 An array is a simple data structure that contains a collection of data presented in contiguous memory locations.

The term “contiguous” used in the definition tells us that the data is stored in consecutive memory locations, so we are not required to search here and there; the data is in a structured format. Moreover, arrays come in many kinds, such as

  • Two-dimensional arrays

  • Three-dimensional arrays

In different cases, a suitable array type is chosen so that we get the best result with limited memory occupancy. With this foundational concept in place, we can now move forward to our main topic, which is matrices.

What is a Matrix?

In real-life applications and in higher studies, matrices are used extensively and in many different forms, so we have decided to talk about them from a very basic level, since it is important to understand the key features of the topics we are studying. Moreover, matrices are introduced in early classes, and it is worth refreshing the basics so that we may proceed to more complex problems. Here is the definition of a matrix:

A matrix is a two-dimensional array: an ordered rectangular arrangement of entries, enclosed in square brackets, in which all the entries are of the same kind, in the form of real or complex data.

The plural of matrix is matrices, and sometimes the square brackets are replaced by parentheses, depending on the convention. Just look at the image given below:

This is a matrix that contains twelve elements, and you can name this matrix anything you want. In this way, it becomes easy to deal with more than one matrix, and you will see this in action soon.

Order of a Matrix

To proceed further, you must know the types of matrices, and for this, it is important to know the order of a matrix.

In the matrix given above, the horizontal lines of entries are called the rows, whereas the vertical lines of entries are termed the columns of that particular matrix.

If we represent the rows with the name m and the columns as n, then the order of the matrix is given as:

mxn

In this way, it is clear that the matrix given above has the order 3x4. If this seems unnecessary to you, think again, because with the help of the order we can know the type of a matrix and then perform different types of operations on it. But before this, have a look at some code in MATLAB to design matrices of different kinds.

Code for the Simple Matrix

Matrices are easy to use in MATLAB, and you can start working with them by following the simple steps given below:

  • Start your MATLAB software.

  • Go to the command window.

  • Start writing the following code:

A = [23 14 -8 33; 17 -102 0 37; 3 -31 98 4]   % no trailing semicolon, so MATLAB displays the matrix

  • Press enter. 

These are the same entries that we saw in the image given above, and in MATLAB you will see the following result:

The square brackets are not shown on the sides of the array in MATLAB. As you can see, the semicolon after every four entries indicates that the row is complete and MATLAB has to start the next row.

Here, A is the name of the matrix; a name is compulsory, and you can call your matrix any word you like. If you do not follow the exact format and provide a different number of entries in different rows, you will get an error. Once you know how to get started, you are ready to learn about the types of matrices.
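
Before moving on, one handy check: the built-in size function confirms the order of a matrix. As a small sketch using the matrix A declared above:

size(A)            % returns 3 4, i.e. three rows and four columns
[m, n] = size(A)   % m = 3, n = 4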

Types of Matrices

There are several different types of matrices, and you can perform certain arithmetic operations, such as addition, on two matrices only if they have the same order. This condition does not apply to every operation, but most of them follow such rules. Here are some important types of matrices.

Row Matrix

A row matrix contains only one row and it is one of the simplest forms of a matrix. In this way, we get the matrix with a horizontal shape. The order of this matrix is:

mxn=1xn

Where n may be any number. 

Column Matrix

As you can guess, the column matrix is a type of matrix containing only one column and one or multiple rows. In this way, we get a matrix that has a vertical shape. Have a look at the order of a column matrix:

mxn=mx1

Where m may be any number, but the value of n is always one.

Square Matrix

A square matrix always has an equal number of rows and columns. It means that, no matter what the total number of entries is, the number of rows and the number of columns must always be equal. In other words,

m=n

When you examine the example of a square matrix, you will get the reason why it is called so. The shape of this type of matrix is always square.

Rectangular Matrix

A rectangular matrix is one that has the arrangement of elements in such a way that the number of rows of the matrix is not equal to the number of columns. The same statement can be represented in the equation given next:

m!=n

Therefore, the matrix formed is in a rectangular shape, either in vertical format or horizontal format, according to the number of rows and columns.

Diagonal Matrix

We all know that the diagonal is the line that joins the upper-left corner with the lower-right corner of a rectangle or square. By the same token, a diagonal matrix is one in which all the values other than those on the main diagonal are zero. It will be clearer when you see the example of the diagonal matrix. We have set the examples of all the types of matrices that we have defined previously onto a single MATLAB screen so you may have the best idea of each of them.

Code and Output

Moreover, here you can observe that instead of naming the matrices A, B, and so on, we have used descriptive names for a clearer declaration. Your homework is to make an example of each of them by yourself for the sake of practice.
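
Since that screen is shown as an image, here is a minimal sketch of what such declarations might look like; the values themselves are made up, but each declaration illustrates its type:

RowMatrix = [1 2 3 4]                      % 1x4: a single row
ColumnMatrix = [5; 6; 7]                   % 3x1: a single column
SquareMatrix = [1 2; 3 4]                  % 2x2: rows equal columns
RectangularMatrix = [1 2 3; 4 5 6]         % 2x3: rows not equal to columns
DiagonalMatrix = [7 0 0; 0 8 0; 0 0 9]     % nonzero entries only on the main diagonal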

Finding the Unknown Values Between Two Matrices

Do you remember when we said the order of a matrix matters? This is one of the uses of the order of a matrix. Suppose we have two matrices named A and B, and we declare that both are equal. This means that the value of matrix A at row 1, column 1 is equal to the value of matrix B at the same position, and the same is true for all the remaining entries of both matrices. Let me be clear with one example. Have a look at the picture given below:

So, the value of r, and in turn the value of every entry containing r, can easily be obtained by equating the corresponding entries. This is one of the simplest examples of doing so, but in real life we face more complex problems, so we use MATLAB for simplicity and accurate results. As an application of the same idea, have a look at the MATLAB code below, where you will see how easily you can solve simultaneous equations in MATLAB as well.
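
Because the picture above is only for illustration, here is a minimal, hypothetical sketch of recovering such an unknown in MATLAB; the matrices and the variable r are made up for this example:

syms r
A = [r 2; 3 4];
B = [5 2; 3 4];              % A and B are declared equal
solve(A(1,1) == B(1,1), r)   % corresponding entries must match, so r = 5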

Solving Simultaneous Equations in MATLAB

By using the property of equality of two matrices, we can easily solve simultaneous equations that are difficult and time-consuming to solve by hand. So let's see how we can declare and solve simultaneous equations in MATLAB.

Code:

syms x y                                        % declare the symbolic variables
equa1 = 6*x + 9*y == 13;                        % first equation
equa2 = 9*x + 6*y == 12;                        % second equation
[A,B] = equationsToMatrix([equa1,equa2],[x,y])  % coefficient matrix A and right-hand side B
z = linsolve(A,B)                               % solve A*z = B for the unknowns [x; y]

Output:

Understanding the Code

To understand this code, you have to learn the basic definition of the function we have used in the code. It is the equationsToMatrix function. 

equationsToMatrix Function

The equationsToMatrix function is a predefined MATLAB function that converts linear equations into matrix form so that we can apply different operations to them more efficiently. It does this in the same way as we do in real life when solving simultaneous equations with pen and paper. There are three forms of syntax for this particular function. The one that we have used has the following syntax:

[A,b] = equationsToMatrix(eqns,vars)

Here, eqns holds the equations and vars holds the variables; for simultaneous equations you need at least as many equations as unknowns. You must follow the exact syntax; otherwise, MATLAB will show an error.

linsolve Function in MATLAB

In MATLAB, to solve the linear equation, we use this pre-defined function as it works in two ways:

  1. LU factorization with partial pivoting when, in the equation AX = B, A is a square matrix.

  2. QR factorization, otherwise.

In our case, A is a 2x2 square matrix, so LU factorization with partial pivoting is used. Now you are able to understand the code clearly.

  • First of all, the syms keyword tells MATLAB that we are defining symbolic variables. These may be one or more; here we wanted two variables, and we named them x and y.

  • Now, we simply provide the values of the equation to MATLAB and store both of them into variables named equa1 and equa2 respectively. 

  • The equations and the variables are fed into the equationsToMatrix function to convert the linear simultaneous equations into matrix form for easy solving.

  • In the end, we simply name a matrix z and tell MATLAB that we want the values of the variables x and y.
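
As a quick sanity check, you can solve this system by hand: doubling the first equation gives 12x + 18y = 26, tripling the second gives 27x + 18y = 36, and subtracting the two gives 15x = 10, so x = 2/3 and, substituting back, y = 1. The column vector [2/3; 1] is exactly what z = linsolve(A,B) should return.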

Simultaneous Equation in MATLAB: Method 2

By the same token, we can use the other method that is similar to it but the way it solves the equation is a little bit different. 

Code:

syms x y
equa1 = 6*x + 9*y == 13;
equa2 = 9*x + 6*y == 12;
sol = solve([equa1,equa2],[x,y])
asol = sol.x
bsol = sol.y

Output:

Here, the only new thing to understand is sol.x and sol.y: these fields of the solution structure hold the values of the variables x and y respectively. You can access any variable you declared at the beginning through sol in the same way. After that, a variable (asol or bsol) is used to store and display the value obtained.

It was an interesting lecture about matrices, and we covered many topics from scratch. We defined arrays and introduced matrices, and we also looked at the types of matrices. Once we had a grip on the basics, we learned that matrix equality can be used to find unknown values shared by two matrices, and as an application of this method, we found the values of variables in linear equations, learning how to declare, use, and solve linear equations with the help of matrices in MATLAB.

Stop Motion Movie System using Raspberry Pi 4

Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the previous tutorial, we built a motion-sensor-based security system with an alarm. Additionally, we discovered how to use Twilio to notify the administrator whenever the alarm is triggered. In this tutorial, we'll learn how to build a stop motion film system using the Raspberry Pi 4.

Where To Buy?
No. | Components | Distributor | Link To Buy
1 | Breadboard | Amazon | Buy Now
2 | Jumper Wires | Amazon | Buy Now
3 | Raspberry Pi 4 | Amazon | Buy Now

What you will make

With a Raspberry Pi, Python, and a Pi camera module to capture images, you can create a stop-motion animated video. In addition, we'll learn about the various kinds of stop motion systems and their advantages and disadvantages.

The possibilities are endless when it comes to using LEGO to create animations!

What will you learn?

Using your RPi to build a stop motion machine, you'll discover:

  • How to install and utilize the picamera module on the RPi

  • How to take photos with the picamera library

  • RPi GPIO Pushbutton Connection

  • Operate the picamera by pressing the GPIO pushbutton

  • How to use FFmpeg to create a video clip from the command line

Prerequisites

Hardware

  • Raspberry Pi 4

  • Breadboard

  • Jumper wires

  • Button

Software

FFmpeg should come preinstalled on the most recent release of Raspberry Pi OS (Raspbian). If you don't have it, launch the terminal and type:

sudo apt-get update

sudo apt-get upgrade

sudo apt install ffmpeg

What is stop-motion?

Inanimate things are given life through the use of a sequence of still images in the stop-motion cinematography technique. Items inside the frame are shifted slightly between every picture to create the illusion of movement when stitched together.

You don't need expensive gadgets or Graphics to get started in stop motion. That, in my opinion, is the most intriguing aspect of it.

If you've ever wanted to learn how to make a stop-motion video, you've come to the right place. 

Types of stop-motion

  1. Object-Motion

Product Animation can also be referred to as the frame-by-frame movement of things. You're free to use any items around you to tell stories in this environment.

  2. Claymation

Changing clay items in each frame is a key part of the claymation process. We've seen a lot of clever and artistic figures on the big screen thanks to wires and clay.

  3. Pixilation Stop Motion

Making people move! It is rarely used. Because an actor has to move only a tiny amount in each frame, and because of the number of images you would need, you'll need a lot of patience, and possibly a lot of money if you're hiring people to do it.

The degree of freedom and precision with which they can move is also an important consideration. However, if done correctly, this kind can seem cool, but it can also make you feel a little dizzy at times.

  4. Cutout Animation

You can do a great deal with cutout animation. Two-dimensional scraps of paper may appear lifeless, yet you can color and cut them to show a real depth of detail.

It's a lot of fun to play about with a cartoon style, but it also gives you a lot more control over the final product because you can add your graphics and details. However, what about the obvious drawback? I find the task of slicing and dicing hundreds of pieces daunting.

  5. Puppet Animation

Puppets can be a fun and creative way to tell stories, but they can also be a pain in the neck if you're dealing with a lot of strings, so they may not be the best choice for first-time stop motion filmmakers. These are puppets of a more traditional design.

When animators use the term "puppet" to describe their wire-based clay character, they are referring to claymation as a whole. Puppets based on the marionette style are becoming less popular.

  6. Silhouette Stop Motion

Position the items or performers behind a white sheet and light their shadows on the sheet with a backlight. Simple, low-cost methods exist for creating eye-catching animations of silhouettes.

How long does it take to make a stop-motion video?

The time it takes to create a stop-motion video depends entirely on the scale and nature of your project. Testing out 15- and 30-second movies should only take an hour or two. Because of the complexity of the scenes and the use of claymation, larger stop-motion projects can take days to complete.

Connect the camera to the Raspberry Pi

You must first attach the camera to the Pi before booting it up.

Next to Ethernet, find the camera port. Take a look at the top.

The blue side of the strip should face the Ethernet port when it is inserted into the connector. Push that tab downward while keeping the ribbon in place.

Try out the camera

Use the app menu to bring up a command prompt. The following command should be typed into the terminal:

libcamera-hello

If all goes well, you'll see a camera preview. Don't worry if the image is upside-down; you can fix that afterward. To close the preview, hit Ctrl + C.

For storing an image on your computer, run the command below:

libcamera-jpeg -o test.jpg

To examine what files are in your root folder, type ls in the command line and you'll see test.jpg among the results.

Files and folders will be displayed in the taskbar's file manager icon. Preview the image by double-clicking test.jpg.

The Python picamera library does not work by default with the newest version of Raspberry Pi OS.

To make use of the camera module, one must activate the camera's legacy mode.

The command below must be entered into a command window:

sudo raspi-config

When you get to Interface Options, press the 'Enter' key.

Ensure that the 'Legacy Camera' option is selected, then press the 'Return' key.

Select Yes using the arrow keys and press the 'Return' key.

Press 'Return' again to confirm.

Select Finish.

Press the 'Return' key to reboot.

Python IDLE can be accessed from the menu.

While in IDLE, click File and then New Window to launch a Python code editor.

Paste the code below into the newly opened window, paying careful attention to the capitalization.

from picamera import PiCamera

from time import sleep

camera = PiCamera()

camera.start_preview()

sleep(3)

camera.capture('/home/pi/Desktop/image.jpg')

camera.stop_preview()

Using the File menu, choose Save and save your program (for example, as animation.py).

Use the F5 key to start your program.

You should be able to locate image.jpg on your desktop. It's as simple as clicking it twice to bring up a larger version of the image.

It's possible to fix an upside-down photo either by repositioning your picamera with a camera stand or by telling Python to rotate the picture. Adding the following line will accomplish this.

camera.rotation = 180

Placed right after the line where the camera is set to PiCamera(), the resulting program looks like this:

from picamera import PiCamera

from time import sleep

camera = PiCamera()

camera.rotation = 180

camera.start_preview()

sleep(3)

camera.capture('/home/pi/Desktop/image.jpg')

camera.stop_preview()

A fresh photo with the proper orientation will be created when the file is re-run. Do not remove these lines of code from your program when making the subsequent modifications.

Connect a physical button to a raspberry pi

Hook the Raspberry Pi to the pushbutton as illustrated in the following diagram with a breadboard and jumper wires:

The Button class can be imported at the beginning of the program, the button attached to GPIO pin 17, and the sleep line replaced so that the pushbutton acts as the trigger, in the following way:

from picamera import PiCamera

from time import sleep

from gpiozero import Button

button = Button(17)

camera = PiCamera()

camera.start_preview()

button.wait_for_press()

camera.capture('/home/pi/image.jpg')

camera.stop_preview()

It's time to get to work!

As soon as the preview has begun, press the pushbutton connected to the Pi to take a picture.

If you go back to the folder, you will find your image.jpg there now. Double-click to see the image once more.

Take a picture with Raspberry Pi 4

For a self-portrait, you'll need to include a delay so that you can get into position before the camera board takes a picture of you. Modifying your code is one way to accomplish this.

Before taking a picture, put in a line of code that tells the program to take a little snooze.

camera.start_preview()

button.wait_for_press()

sleep(3)

camera.capture('/home/pi/Desktop/image.jpg')

camera.stop_preview()

It's time to get to work.

Try taking a selfie by pressing the button. Keep the camera steady at all times! It's best if it's already mounted somewhere.

Inspect the photo in the folder once more if necessary. You can snap a second selfie by running the application again.

Things to consider for making a stop motion animation

  1. You must have a steady pi-camera!

This is made easier with the aid of a well-designed setup.  To avoid blurry photos due to camera shaking, you will most likely want to use a tripod or place your camera on a flat surface.

  2. Keep your hands away from the pi-camera

Your stop-motion movie will look best if pressing the pushbutton does not disturb the camera. To get the camera to snap a picture without touching it, you can use a wireless trigger.

  3. Shoot manually

Keep your shutter speed, ISO, aperture, and white balance the same for every photo you shoot. There are no "auto" settings here. You have the option of selecting and locking the camera's configuration first; as long as your settings remain consistent throughout all of your photos, you're good to go (see the sketch after this list for one way to lock them in code). If you leave them on auto, the settings will adapt automatically as you keep moving the items, which may cause flickering from image to image.

  4. Make sure you have proper lighting.

It's ideal to shoot indoors because the light is easier to control and you're shielded from ever-changing daylight. Remember to keep an eye out for windows if you're getting more involved. Try using a basic lighting setup, where you can easily see your items and the light isn't changing too much. Even then, some flickering can show up between frames; sometimes that flicker works well with the animation, but only if it doesn't disrupt the flow of the project.

  5. Frame Rate

You do not need to get extremely technical with this at the beginning, but you'll need to understand how many frames you have to shoot to achieve the sequence you want. One second of film is typically made up of about 12 images or frames; at that rate, for example, a 10-second clip needs roughly 120 photos. If you use far fewer frames per second than this, the motion will look jerky.

  6. Audio

When you're filming your muted stop motion movie, you can come up with creative ways to incorporate your sound later. 
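
As mentioned under "Shoot manually", the picamera library lets you lock exposure and white balance in code. Here is a minimal, self-contained sketch based on the library's consistent-capture recipe; the ISO value is just a placeholder, and in your own program you would apply these settings to the camera object you have already created:

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.iso = 100                               # placeholder ISO; pick one value and keep it
sleep(2)                                       # give the sensor time to settle
camera.shutter_speed = camera.exposure_speed   # freeze the current exposure time
camera.exposure_mode = 'off'                   # stop automatic exposure adjustments
gains = camera.awb_gains                       # remember the current white-balance gains
camera.awb_mode = 'off'                        # stop automatic white balance
camera.awb_gains = gains                       # re-apply the fixed gains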

Stop-motion video

The next step is to experiment with creating a stop motion video using a collection of still photos captured with the picamera. Note that the stills must be saved in their own folder: type "mkdir animation" in the command line.

When the button is pushed, add a loop to your program so that photographs are taken continuously.

camera.start_preview()

frame = 1

while True:

    try:

        button.wait_for_press()

        camera.capture('/home/pi/animation/frame%03d.jpg' % frame)

        frame += 1

    except KeyboardInterrupt:

        camera.stop_preview()

        break

Since a while True loop runs indefinitely, you must be able to end it gracefully. Because the loop uses try-except, pressing Ctrl + C will close the picamera preview and break out of the loop.

Because of the %03d format, files are stored as "frame" followed by a three-digit, zero-padded number (frame005.jpg, frame009.jpg, and so on). This makes it simple to arrange them in the proper sequence for the video.
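
You can see what this formatting produces for yourself at a Python prompt, for example:

print('frame%03d.jpg' % 5)    # prints frame005.jpg
print('frame%03d.jpg' % 12)   # prints frame012.jpg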

To capture each following frame, simply push the button a second time once you've finished rearranging the animation's main element.

To kill the program, use Ctrl + C when all the images have been saved.

Your image collection can be viewed in the folder by opening the animation directory.

Create the video

To initiate the process of creating the movie, go to the terminal.

Start the movie rendering process by running the following command:

ffmpeg -r 10 -i animation/frame%03d.jpg -qscale 2 animation.mp4

Because both FFmpeg and Python recognize the %03d formatting, the photographs are added to the movie in the correct sequence.

Use vlc to see your movie.

vlc animation.mp4

The render command can be edited to change the frame rate. Try adjusting -r 10 to a different value.
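
For example, halving the frame rate makes each photo stay on screen twice as long; the output name below is just an example, chosen so the original file is not overwritten:

ffmpeg -r 5 -i animation/frame%03d.jpg -qscale 2 animation_slow.mp4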

Rename the rendered videos to prevent them from being overwritten. To do this, change animation.mp4 in the command to a different file name.

What's the point of making stop motion?

Corporations benefit greatly from high-quality stop motion films, despite the effort and time it takes to produce them. One of these benefits is that consumers enjoy sharing these movies with friends, and their inspiring content can be associated with a company.  Adding this to a company's marketing strategy can help make its product extra popular and remembered.

When it comes to spreading awareness and educating the public, stop motion films are widely posted on social media. It's important to come up with an original idea for your stop motion movie before you start looking for experienced animators.

Stop Motion Movie's Advantages

In the early days of filmmaking, stop motion was mostly employed to give animated characters the appearance of mobility. The cameras would be constantly started and stopped, and the multiple images would all be put together to tell a gripping story.

It's not uncommon to see films employ this time-honored method as a tribute to the origins of animations. There's more, though. 

  1. Innovation

In the recent resurgence of stop motion animations, strange and amazing props and procedures have been used to create these videos. Filmmakers have gone from generating stop motion with a large sheet of drawings, to constructing them with plasticine figures that need to be manually manipulated millimeters at a time, and to more esoteric props such as foodstuffs, domestic objects, and creatures.

Using this technique, you can animate any object, even one that isn't capable of moving by itself. A stop-motion movie may be made with anything, thus the options are practically limitless.

  2. Animated Tutorials

A wide range of material genres, from educational films to comedic commercials, is now being explored with stop motion animation.

When it comes to creating marketing and instructional videos, stop motion animation is a popular choice due to its adaptability; a unique video can be created for almost any purpose.

Even a film that is about five minutes long can hold viewers because of its originality; the techniques employed captivate the audience, and once you start watching a good stop motion video, it's hard to stop before the end.

  3. Improve the perception of your brand

It's easy to remember simple but innovative animations like these. These movies can help make a company's image, and later recall of it, more positive. Stop motion video can provoke thought and awe in viewers, prompting them to spread the creative message to their social networks and professional contacts.

It is becoming increasingly common for organizations of all kinds to include stop-motion animations in their advertisements. 

  4. In education 

Stop-motion films can have a positive impact on both education and business. Employees, customers, and students all benefit from using them to learn difficult concepts and methods more enjoyably. Stop motion filmmaking can liven up any subject matter, and pupils are more likely to retain what they've learned when it's done this way.

Some subjects can be studied more effectively in this way as well. Using stop motion films, for instance, learners can see the entire course of an experiment involving a slow-occurring reaction in a short amount of time.

Learners are given a stop motion assignment to work on as a group project in the classroom. Fast stop motion animation production requires a lot of teamwork, which improves interpersonal skills. Some learners would work on the models, while others might work on the backdrops and voiceovers, while yet others might concentrate on filming the scenes and directing the actors.

  5. Engage Customers and Employees

Stop motion movies can be used to explain how a product is used quickly, even when using the device and seeing its output would take a while in real time. You can speed up the timeline as much as you want in stop motion animation!

For safety and health demonstrations or original sales demonstrations, stop motion instructional films may also be utilized to effectively express complex concepts. Because of the videos' originality, viewers are more likely to pay attention and retain the content.

  6. Music Video

Some incredibly creative music videos have lately been created using stop motion animations, which has recently seen a resurgence in popularity.  Even the human body could be a character in this film.

Stop-motion animations have the potential to be extremely motivating. Sometimes, it's possible to achieve it by presenting things in a novel way, such as by stacking vegetables to appear like moving creatures. The sky's the limit when it comes to what you can dream up.

  7. Reaction-Inducing Video

Creating a stop motion movie doesn't have to be complicated: a camera, something to hold it steady, and a few props are enough to begin filming. However, if you want to create a professional-level stop motion film, you'll need to enlist the help of an animation company.

As a marketing tool, animated videos may be highly effective when they are created by a professional team. 

  8. Create an Intriguing Idea

The story of a motion-capture movie is crucial in attracting the attention of audiences, so it should be carefully planned out before production begins. It should be appropriate for the video's intended audience, brand image, and message. If you need assistance with this, consider working with an animation studio.

Disadvantages

But there are several drawbacks to the overall process of stop motion filmmaking which are difficult to overcome. The most notable is the time it takes to create even a minute of footage: depending on the approach used, it might range from a few days to many weeks.

Additionally, the amount of time and work that is required to make a stop-motion movie might be enormous. This may necessitate the involvement of a large team. Although this is dependent on the sort of video, stop motion animating is now a fairly broad area of filmmaking, which can require many different talents and approaches.

Conclusion

Using the Raspberry Pi 4, you were able to create a stop-motion movie system. Various stop motion techniques were also covered, along with their advantages and disadvantages. After completing the system's basic functions and integrating additional components of your choice, you're ready to go on to the next phase of programming. In the next article, we will build an LED cube using the Raspberry Pi 4.

Sequencer Output Instruction in PLC Ladder Logic Programming

Hi friends, today we are going to learn a good technique for running multiple outputs in sequence; in other words, for when we have outputs that are repeatedly run one after another. In the normal or conventional programming technique, we deal with them individually, one by one, which takes more programming effort and more memory space. Instead, we can use a technique that triggers these outputs in sequence using one instruction, which saves programming effort and memory. In this article, we are going to introduce how to implement the sequencer output instruction, and practice some examples with the simulator as usual. Before starting, we need to mention that some controllers, like Allen Bradley, have a sequencer output instruction, and some, like Siemens, do not. So we are going to give one example for each case, showing how to code the equivalent of the sequencer output instruction in PLCs that do not support it.

Sequencer output instruction 

Figure 1 shows the block diagram of the process. The instruction takes the input data from a file, array, or data block and relays it to the outputs step by step to trigger them sequentially.

Figure 2 shows the block of the sequencer output instruction with its input and output parameters. The file parameter is the first input parameter and holds the address of the reference sequencer file. The mask input receives the address or value of the mask through which the instruction passes the data before relaying it to the output. The dest parameter is an output parameter that holds the address of the output word to which the sequenced bits will be applied. The control parameter holds the storage words for the status of the instruction, the file length, and the position of the pointer in the data file or array. The length parameter holds the number of steps to move through in the data file before wrapping around, and the position parameter holds the current location of the pointer in the data file.


Block description and example of the sequencer output instruction

Figure 3 shows an instance of the sequencer output instruction SQO. The SQO instruction steps through the input data file, array, or data block and sequentially transfers the data bits from the data file to the output (destination word) through the mask word. The status of the instruction can be seen in the DONE (DN) bit. You should notice, my friends, that after the last data transfer is done, the position is reset to the initial position, i.e. the first step.

Ladder Logic Example

Now guys, let us move to the ladder logic coding. So how does the sequencer output instruction work in ladder logic? Well! Figure 4 shows a complete example of the SQO instruction that is used in Allen Bradley to handle the sequencer output process. It shows one rung that has start and stop push buttons, from left to right, at addresses I:0/0 and I:0/1 respectively, to control starting and stopping of the sequencer output processing. Next, you can see input I:0/2, which is used as a sequencer process flag to switch the sequencer process on or off. So, if the start PB is pressed while there is no emergency stop and the sequencer-on input is ON, the SQO is enabled and the data at address #B3:0 will be moved to the destination at address O:0 through the mask word 000Fh, starting from position 0 with length 4.

Figure 5 shows the data file that the SQO uses to transfer the sequence data bits to the output. It shows that the bits B3:0, B3:1, B3:2 & B3:3 are set to 1 for reference. So, when the sequencer-on input (I:0/2) is set high, the output O:0/1 will be turned on based on the data in the data file shown in Fig. 5. In that case, the length is 4 and the position is 1.

And when the sequencer flag I:0/2 is switched on next time, output O:0/2 will be switched ON. In that case, the length is 4 and the position is 2 as shown in Fig. 6.

In the third time, the sequencer flag is turned ON, the output O:0/3 will be turned ON and the length and position are updated to 4 and 3 respectively as shown in Fig. 7.

When it comes to the fourth time of switching the sequencer flag I:0/2, the output O:0/4 will be turned high, the position will be at 4 and the length is 4, as shown in Fig. 8. At that point, the process wraps around and the position is reset to the first step.

The previous example shows how simple it is to control a bunch of outputs that are required to run in sequence with only one rung of the ladder program, using only one instruction, the SQO in Allen Bradley. This helps to save memory space, as well as the time and effort of programming and troubleshooting, because the program is shorter and more readable. However, some brands, even big names like Siemens, do not support such an instruction. That cannot really be counted as a limitation, because there is still a way to implement the same logic. So, it is very beneficial for us to implement together a piece of ladder logic that is equivalent to this instruction, performing the function of the sequencer output instruction in the Siemens S7-1200 using our simulator.

Ladder logic code equivalent to SQO

As you can see, the sequencer output instruction is essentially nothing but shifting a set bit from one output to the next, from right to left, or even using a rotating shift for continuous operation. That drives our thinking to use the shift instructions in Siemens to perform this sequencer output function.

Figure 9 shows the rungs of a simple ladder PLC program that implements the sequencer output process. See, guys, how lengthy the logic is that we have to code to do the same job as the single SQO instruction in Allen Bradley; the program is longer and takes more effort and more memory that way. Moving to the logic listed in Fig. 9: first, a rotate-shift instruction shifts through the data block bit by bit and applies the result to the output word QW0. At the same time, an increment instruction is used to move the position pointer through the data. Also, an on-delay timer is used to add some delay so the sequencing of the outputs can be seen. At the end, a comparison instruction checks whether the pointer (the position) has reached the last output coil, in which case it resets to the first position, and so on.
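
To make the stepping idea concrete, here is a minimal Python sketch (not PLC code) that models what the sequencer does: it advances a position pointer through a list of data words, masks each word, and writes the result to an output word. The data patterns and mask below are made up for illustration:

# Minimal model of a sequencer output: step through a data file,
# apply a mask, and drive an output word.
data_file = [0b0001, 0b0010, 0b0100, 0b1000]    # example step patterns (one bit per output)
mask = 0x000F                                   # only the low four bits pass through
position = 0

def step_sequencer():
    """Advance one step and return the new output word."""
    global position
    position = (position + 1) % len(data_file)  # wrap back to the start after the last step
    return data_file[position] & mask           # masked data for the current step

for _ in range(6):                              # simulate six "sequencer on" events
    print(bin(step_sequencer()))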

Simulating the sequencer output ladder code

Figure 10 shows the simulation of the sequencer output ladder code before the process is activated by M0.0; the position is at 1 and the output QW0 is all zeros. So let us activate the sequencer output process by setting M0.0 high and see the output.

Figure 11 shows the process after activating the sequencer program, which starts switching on the outputs sequentially. The figure shows the process has reached the sixth output coil and the position is set to 7 to point at the next output. The process continues until it reaches the last output, and then the position is reset to the first step.

What’s Next???

I am glad to have you guys following along to this point, and I hope you see the importance of the sequencer output technique in reducing programming effort and saving memory. Next time we will take a tour of the bitwise logic operators, how they are used, and how they work in a ladder program, with examples and, for sure, simulations to practice their usage. So let's meet then, and be ready.


Build a Twitter bot in Raspberry pi 4

Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we integrated a real-time clock with our Raspberry Pi 4 and used it to build a digital clock. In this tutorial, we will construct your personal Twitter bot using Tweepy, a Python framework for querying the Twitter application programming interface (API).

You will construct a reply-to-mentions bot that posts a response to every tweet that mentions it with a certain keyword.

The response will be a photo that we generate and overlay with text. The text is a quote acquired from a third-party application programming interface. Finally, we will look at the benefits and drawbacks of bots.

This is what it looks like:

Where To Buy?
No. | Components | Distributor | Link To Buy
1 | Raspberry Pi 4 | Amazon | Buy Now

Prerequisites

To continue through this guide, you'll need to have the following items ready:

An AWS account

Ensure you've signed up for AWS Elastic Beanstalk before deploying the finished project.

Twitter application programming interface auth credentials

To connect your bot to Twitter, you must create a developer account and build an app through which Twitter grants you API access. 

Python 3

Python 3.9 is the current version, although it is usually recommended to use an edition that is one point behind the latest version to avoid compatibility problems with 3rd party modules. 

You have these Python packages installed in your local environment.

  • Tweepy: used to communicate with the Twitter API

  • Pillow: used to create an image and add text to it

  • Requests: used to send HTTP requests to the random quote generator API

  • APScheduler: used to run the reply job on a regular schedule

  • Flask: used to build the web app required for the Elastic Beanstalk deployment

The other modules you will see are already included in Python, so there's no need to download and install them separately.

Twitter application programming interface auth credentials

OAuth authentication is required for all requests to the official Twitter API. As a result, to use the API, you must first create the necessary credentials. They are the following:

  • consumer key

  • consumer secret

  • access token

  • access token secret

Once you've signed up for Twitter, you'll need to complete the following steps to generate your user ID and password:

Step 1: Fill out an Application for a Developers Twitter Account

The Twitter developer’s platform is where you may apply to become a Twitter developer.

When you sign up for a developer account, Twitter will inquire about the intended purpose of the account. Consequently, the use case of your application must be specified.

To expedite the approval process and increase your chances of success, be as precise as possible about the intended usage of your product.

Step 2: Build an App

Verification may take up to a week. Once Twitter has granted your developer account access, build an application from the Twitter developer portal dashboard.

Authentication credentials can only be generated for an app, so you must go through this process; an app is how Twitter's API identifies your project. The following information regarding your project is required:

  • Your project's name serves as its identifier.

  • Your project's category should be selected here. Choose "Creating a bot" in this scenario.

  • Your project's purpose or how users will interact with your app should be described in this section. 

  • The app's name: Finally, give your app a name by typing it in the box provided.

Step 3: Create the User Credentials

To begin, navigate to Twitter's apps section of your account and create your user credentials. When you click on this tab, you'll be taken to a new page on which you can create your credentials.

The details you generate should be saved to your computer so they may be used in your program later. A new script called credentials.py should be created in your project's folder and contains the following four key-value pairs:

access_token="XXXXXXX"

access_token_secret="XXXXXXXX"

API_key="XXXXXXX"

API_secret_key="XXXXXXXX"

You can also test the login details to see if everything is functioning as intended using:

import tweepy

# Authenticate to Twitter

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")

auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

api = tweepy.API(auth)

try:

    api.verify_credentials()

    print("Authentication Successful")

except:

    print("Authentication Error")

Authorization should be successful if everything is set up correctly.

Understand Tweepy

Tweepy is a free, simple Python module for interacting with the Twitter application programming interface. It hides much of the low-level detail, providing a convenient way for your program to call the API.

Tweepy's newest release can be installed by using the following command:

pip install tweepy

Installing from the git repo is also an option.

pip install git+https://github.com/tweepy/tweepy.git

Here are a few of its most important features:

OAuth

As part of Tweepy, the OAuthHandler class handles the authentication process required by Twitter; its usage is exactly what you saw in the code above.

Twitter application programming interface wrapper

Tweepy provides an API class that wraps the RESTful application programming interface methods. You'll find a rundown of some of the more popular groups of methods below:

  • Function for tweet

  • Function for user

  • Function for user timeline

  • Function for trend

  • Function for like

Models

Tweepy model class instances are returned when any of the API functions listed above are invoked; the Twitter response is contained in the model. For example:

user = api.get_user('apoorv__tyagi')

When you use this method, you'll get a User model with the requested data. For instance:

print(user.screen_name)      # user name
print(user.followers_count)  # user follower count

Fetch the Quote

You're now ready to begin the process of setting up your bot. Whenever somebody mentions the robot, it will respond with a picture with a quotation on it.

So, to get the quote, you'll need to use an API for a random quotation generator. To do this, you'll establish a new function in the tweet_reply.py script and send an HTTP request to the API endpoint. Python's requests library can be used to accomplish this.

Using Python's requests library, you can send HTTP requests without dealing with the complexities of constructing them by hand, so you can focus on the program's interaction with the service and the data it consumes.

def get_quote():

    URL = "https://api.quotable.io/random"

    try:

        response = requests.get(URL)

    except:

        print("Error while calling API...")

A typical API response looks like this:

The json module can parse the reply from the API. You can add it to your program with import json, because it is part of the standard library.

As a result, your method returns only the quote's content and author, which is all you need. Here is how the whole function looks:

def get_quote():

    URL = "https://api.quotable.io/random"

    try:

        response = requests.get(URL)

    except:

        print("Error while calling API...")

    res = json.loads(response.text)

    return res['content'] + "-" + res['author']

Generate Image

You have your text in hand. You'll now need to create an image and overlay it with the text you just fetched.

The Pillow module should always be your first port of call when working with images in Python. The Python Pillow imaging module provides image analysis and filetypes support, providing the interpreter with a strong image processing capacity.

Wallpaper.py should be created with a new function that accepts a quote as the argument.

def get_image(quote):

    image = Image.new('RGB', (800, 500), color=(0, 0, 0))

    font = ImageFont.truetype("Arial.ttf", 40)

    text_color = (200, 200, 200)

    text_start_height = 100

    write_text_on_image(image, quote, font, text_color, text_start_height)

    image.save('created_image.png')

Let's take a closer look at this feature.

  • Image.new() A new photo is created using the given mode and size. The first thing to consider is the style used to generate the new photo. There are a couple of possibilities here: RGB or RGBA. Size is indeed the second factor to consider. The width and height of an image are given as tuples in pixels. The color of the background image is the final option (black is the default color).

  • ImageFont.truetype(): a font object is created by this method, using the provided font file and the desired font size. While "Arial" is used here, you are free to use any other font if you so like. Font files should be saved in the project root folder with a TrueType font file extension, such as font.ttf.

  • In other words, the text's color and height at which it begins are specified by these variables. RGB(200,200,200) works well over dark images.

  • image.save(): the created PNG image is saved in the root directory by this call. It will overwrite any existing image with the same name.

def write_text_on_image(image, text, font, text_color, text_start_height):

    draw = ImageDraw.Draw(image)

    image_width, image_height = image.size

    y_text = text_start_height

    lines = textwrap.wrap(text, width=40)

    for line in lines:

        line_width, line_height = font.getsize(line)

        draw.text(((image_width - line_width) / 2, y_text),line, font=font, fill=text_color)

        y_text += line_height

This method, in the same script Wallpaper.py, is what adds the message to the image. Let's take a closer look at how it works:

  • ImageDraw.Draw() creates a two-dimensional drawing object for the image.

  • textwrap.wrap() wraps the single paragraph of text so that each line is no more than 40 characters long; the wrapped lines are returned as a list.

  • draw.text() draws a line of text at the provided location. 

Use parameter:

  • XY — The text's upper-left corner.

  • Text — The text to be illustrated.

  • Fill — The text should be in this color.

  • font — One of ImageFont's instances

This is what Wallpaper.py looks like after the process:

from PIL import Image, ImageDraw, ImageFont

import textwrap

def get_wallpaper(quote):

    # image_width

    image = Image.new('RGB', (800, 400), color=(0, 0, 0))

    font = ImageFont.truetype("Arial.ttf", 40)

    text1 = quote

    text_color = (200, 200, 200)

    text_start_height = 100

    draw_text_on_image(image, text1, font, text_color, text_start_height)

    image.save('created_image.png')

def draw_text_on_image(image, text, font, text_color, text_start_height):

    draw = ImageDraw.Draw(image)

    image_width, image_height = image.size

    y_text = text_start_height

    lines = textwrap.wrap(text, width=40)

    for line in lines:

        line_width, line_height = font.getsize(line)

        draw.text(((image_width - line_width) / 2, y_text),line, font=font, fill=text_color)

        y_text += line_height

Responding to Mentions by Keeping an Eye on the Twitter Feed.

You've got both the quote and an image that incorporates it in one. It's now only a matter of searching for mentions of you in other people's tweets. In this case, in addition to scanning for comments, you will also be searching for a certain term or hashtags.

When a tweet contains a specific hashtag, you should like and respond to that tweet.

You can use the hashtag "#qod" as the keyword in this situation.

Returning to the tweet_reply.py code, the following function does what we want:

def respondToTweet(last_id):

    mentions = api.mentions_timeline(last_id, tweet_mode='extended')

    if len(mentions) == 0:

        return

    for mention in reversed(mentions):

        new_id = mention.id

        if '#qod' in mention.full_text.lower():

            try:

                tweet = get_quote()

                Wallpaper.get_wallpaper(tweet)

                media = api.media_upload("created_image.png")

                api.create_favorite(mention.id)

                api.update_status('@' + mention.user.screen_name + " Here's your Quote", 

                      mention.id, media_ids=[media.media_id])

            except:

                print("Already replied to {}".format(mention.id))

  • respondToTweet(): last_id is the function's only argument. Using this variable, you only retrieve mentions created after the ones you've already processed. The first time you invoke the method, its value is 0, and you keep updating it with each subsequent call. 

  • mentions_timeline(): this Tweepy function retrieves the tweets that mention you. Because of the first parameter, only tweets with an ID newer than the provided value are returned; by default, the last 20 tweets are shown. When tweet_mode='extended' is used, the full, uncut text of the tweet is returned; if the option is set to 'compat', the text is shortened to 140 characters.

create_favorite() is used to like every tweet that mentions you; the loop iterates over reversed(mentions), so the earliest mention is processed first.

In your case, you'll use update_status() to send the reply, which includes the original tweet author's Twitter handle, your text, the ID of the tweet being replied to, and your list of media IDs.

To Prevent Repetition, Save Your Tweet ID

You need to avoid repeatedly responding to the same tweet. Simply save the ID of the tweet you last answered in a text file, tweet_ID.txt, and afterwards only scan for newer tweets. The mentions_timeline() function takes care of this automatically, because tweet IDs increase over time.

Now, you'll pass in the file holding this last ID; the method retrieves the ID from the file, and the file is updated with the newest ID at the end.

Finally, here is what the method response to tweet() looks like in its final form:

def respondToTweet(file):

    last_id = get_last_tweet(file)

    mentions = api.mentions_timeline(last_id, tweet_mode='extended')

    if len(mentions) == 0:

        return

    for mention in reversed(mentions):

        new_id = mention.id

        if '#qod' in mention.full_text.lower():

            try:

                tweet = get_quote()

                Wallpaper.get_wallpaper(tweet)

                media = api.media_upload("created_image.png")

                api.create_favorite(mention.id)

                api.update_status('@' + mention.user.screen_name + " Here's your Quote", 

                      mention.id, media_ids=[media.media_id])

            except:

                logger.info("Already replied to {}".format(mention.id))

    put_last_tweet(file, new_id)

You'll notice that two additional utility methods, get_last_tweet() and put_last_tweet(), have been added in this version.

get_last_tweet() takes a file name and reads the last processed tweet ID from it; put_last_tweet() takes the file name and the newest tweet ID and updates the file with that ID.

Here's what the final tweet_reply.py should look like after everything has been put together:

import tweepy

import json

import requests

import logging

import Wallpaper

import credentials

consumer_key = credentials.API_key

consumer_secret_key = credentials.API_secret_key

access_token = credentials.access_token

access_token_secret = credentials.access_token_secret

auth = tweepy.OAuthHandler(consumer_key, consumer_secret_key)

auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth)

# For adding logs in application

logger = logging.getLogger()

logging.basicConfig(level=logging.INFO)

logger.setLevel(logging.INFO)

def get_quote():

    url = "https://api.quotable.io/random"

    try:

        response = requests.get(url)

    except:

        logger.info("Error while calling API...")

    res = json.loads(response.text)

    print(res)

    return res['content'] + "-" + res['author']

def get_last_tweet(file):

    f = open(file, 'r')

    lastId = int(f.read().strip())

    f.close()

    return lastId

def put_last_tweet(file, Id):

    f = open(file, 'w')

    f.write(str(Id))

    f.close()

    logger.info("Updated the file with the latest tweet Id")

    return

def respondToTweet(file='tweet_ID.txt'):

    last_id = get_last_tweet(file)

    mentions = api.mentions_timeline(last_id, tweet_mode='extended')

    if len(mentions) == 0:

        return

    new_id = 0

    logger.info("someone mentioned me...")

    for mention in reversed(mentions):

        logger.info(str(mention.id) + '-' + mention.full_text)

        new_id = mention.id

        if '#qod' in mention.full_text.lower():

            logger.info("Responding back with QOD to -{}".format(mention.id))

            try:

                tweet = get_quote()

                Wallpaper.get_wallpaper(tweet)

                media = api.media_upload("created_image.png")

                logger.info("liking and replying to tweet")

                api.create_favorite(mention.id)

                api.update_status('@' + mention.user.screen_name + " Here's your Quote", mention.id,

                                  media_ids=[media.media_id])

            except:

                logger.info("Already replied to {}".format(mention.id))

    put_last_tweet(file, new_id)

if __name__=="__main__":

    respondToTweet()
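
One practical note: get_last_tweet() expects tweet_ID.txt to already exist and to contain a number, so create it once before the first run, seeded with any old tweet ID or simply with 1, for example:

echo 1 > tweet_ID.txt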

Deploy the bot to Server

In order to complete the process, you will need to upload your program to a server. In this section, the Python application is deployed using AWS Elastic Beanstalk.

Amazon web service simplifies management while allowing for greater flexibility and control. Your application is automatically provisioned with capacity, load-balanced, scaled and monitored for health using Elastic Beanstalk.

Here is how it's going to work out:

  • Install Python on the AWS  environment

  • Build a basic Flask app for the bot

  • Connect to AWS and deploy your Flask app

  • Use logs to find and fix bugs

Set up Elastic Beanstalk environment

After logging into your AWS account, search for and select "Elastic Beanstalk," then choose to create a new application.

You'll be asked to provide the following information:

  • Name of the application; 

  • Application's tags; 

  • Environment;

  • Code of the application

Each AWS Elastic Beanstalk application resource can have up to 50 tags. Using tags, you may organize your materials. The tags may come in handy if you manage various AWS app resources.

When you select Python as the platform, the platform branch and version fields are filled in automatically.

Later, you will deploy your app to Elastic Beanstalk. For now, select "sample app" from the drop-down menu and click "create app." It should be ready in about a minute or two.

Create a Flask app

Flask is a web development framework written in Python. It's simple to get started with and easy to use. Flask needs very little boilerplate, making it a "beginner-friendly" framework for web applications.

Flask has several advantages over other frameworks for building online applications, including:

  • Flask comes with a debugger and a development server.

  • It takes advantage of Jinja2's template-based architecture.

  • It complies with the WSGI 1.0 specification.

  • Unit testing is made easier with this tool's built-in support.

  • Flask has a plethora of extensions available for customizing its behavior.

Flask as a micro-framework

It is noted for being lightweight and simply providing the needed components. In addition to routing, resource handling, and session management, it includes a limited set of website development tools. The programmer can write a customized module for further features, such as data management. This method eliminates the need for a boilerplate program that isn't even being executed.

Create a new Python script and call it application.py, then paste the code below into it while AWS creates an environment.

from flask import Flask

import tweet_reply

import atexit

from apscheduler.schedulers.background import BackgroundScheduler

application = Flask(__name__)

@application.route("/")

def index():

    return "Follow @zeal_quote!"

def job():

    tweet_reply.respondToTweet('tweet_ID.txt')

    print("Success")

scheduler = BackgroundScheduler()

scheduler.add_job(func=job, trigger="interval", seconds=60)

scheduler.start()

atexit.register(lambda: scheduler.shutdown())

if __name__ == "__main__":

    application.run(port=5000, debug=True)

Here, APScheduler and a Flask app are used to execute a single job() function, which calls the main method in the tweet_reply.py script once every minute.

As a reminder, the Flask instance must be named "application," as it is in the code above. Elastic Beanstalk looks for a callable with exactly that name, so you must give it the correct name for the deployment to work.

Set up and deploy the app on Amazon Web Services

Your web app's code can include Elastic Beanstalk configuration files (.ebextensions) for configuring AWS resources and the environment.

These YAML files use the .config extension and are placed in the .ebextensions directory, which is deployed together with the app's code.

Create a new directory called .ebextensions inside the code folder and add a new file called python.config. Add the following code:

files:

  "/etc/httpd/conf.d/wsgi_custom.conf":

    mode: "000644"

    owner: root

    group: root

    content: WSGIApplicationGroup %{GLOBAL}

For Elastic Beanstalk to install the app's prerequisites, you'll need to list any external libraries in a requirements.txt file.

Execute the command below from the project folder to generate the requirements.txt file using pip freeze:
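
$ pip freeze > requirements.txt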

Finally, package everything up for uploading to Elastic Beanstalk. The structure of your project directory should now look like this:

Compress all the files and directories listed here into a single zip archive. Open AWS again and select Upload Code.

Once you've selected the zip archive, click "Deploy." When the health indicator turns green, your app has been launched successfully. If all of the above steps have been followed correctly, "Follow @zeal_quote!" should appear at your application's URL.

Procedure for getting an error report in the system

The following steps will help you access the reports of your app in the event of an error:

  • Logs can be seen under the "Environment" tab in the Dashboard.

  • After choosing "Request Log," you'll be taken to a new page with an options list. The last lines option is for the latest issues, but the "full log" option can be downloaded if you need to troubleshoot an older error.

  • To see the most recent log lines, click "Download"; a new web page will open.

    The Benefits and Drawbacks of Twitter Automation

    Social media entrepreneurs benefit greatly from automation, which reduces their workload while increasing their visibility on Twitter and other platforms. We may use various strategies to ensure that we're always visible on Twitter.

    The benefits of automation are numerous. 

    There is still a need for human intervention with any automated process.

    However, automation should only be a minor element of your total plan. An online presence that is put on autopilot might cause problems for businesses. If your campaign relies on automation, you should be aware of these problems:

    Appearing like a robot

    Engaging others is all about being yourself. Bad grammar and the occasional typo are what tell readers a tweet was written by a real person typing on a phone. Those who aren't in the habit of writing their own tweets on the fly risk seeming robotic when they send out several automated messages. Tweets written in advance and scheduled to post at specific times can appear disjointed and formulaic.

    It is possible to appear robotic and dry if you retweet several automated messages. If your goal is to promote user interaction, this is not the best option.

    The solution: Don't automate all of your messages. The platform can also be used for real-time interaction with other people. Whenever feasible, show up as yourself at gatherings.

    Awful Public Relations Fumbles

    When you plan a message to go out at a specific time, you have no idea what will be trending. If a tragic tale is trending, the tweet could be insensitive and out of context. On Twitter, there is a great deal of outrage. Because everyone is rightly concerned about their collective destiny, there is little else to talk about.

    Then, in a few hours, a succession of your tweets surface. Images showing the group having a great time in Hawaii.

    While it's understandable that you'd want to avoid coming across as uncaring or unaware in this day and age of global connectivity and quick accessibility of info from around the globe, it's also not a good look. Of course, you didn't mean it that way, but people's perceptions can be skewed.

    What to do in response to this: Automatic tweets should be paused when there is a major development such as the one above. If you're already informed of the big news, it's feasible, but it may be difficult due to time variations.

    Twitter automation allows your messages to display even if you are not into the service. Your or your company's identity will remain visible to a worldwide audience if you have a global target market.

    If an automatic tweet appears before you can brush up on the latest events in your location, follow it up with a real one to show your sympathy. People find out about breaking news through Twitter, a global platform. Few of us have the luxury of remaining in our small worlds. While it's excellent to be immersed in your company's day-to-day operations, it's also beneficial to keep up with global events and participate in Twitter's wider discussion.

    Absence of Reaction

    People respond to your automatic tweets with congratulations, questions, or reports of broken links, and those replies go unanswered because you aren't the one publishing the tweets; a program is doing it in your stead. Awkward.

    Suppose someone replies to you in the wee hours of the morning, and an hour later another scheduled tweet from you appears. After seeing the fresh tweet, the reader wonders whether Mr. I-Know-It-All-About-Social-Media has even read their reply.

    What to do in response to this situation: When you next have a chance to log on, read through the comments and answer any that have been left. Delayed responses are better than no responses. Some people don't understand that we're not all connected to our Twitter 24 hours a day.

    Damage to the reputation of your company

    As a means of providing customer support, Twitter has become increasingly popular among businesses. It's expected that social media queries will be answered quickly. Impatience breeds on the social web since it's a real-time medium where slow responses are interpreted as unprofessionalism.

    On the other hand, automatic tweets give the impression that businesses are always online, which encourages clients to interact with the company. Customers may then feel neglected if they don't hear back.

    When dealing with consumer issues, post the exact hours you'll be available.

    Vital Comments Left Unanswered

    As soon as somebody insults you, the business, or even just a tweet, you don't want to let those unpleasant feelings linger for too long. We're not referring to trolls here; we're referring to legitimate criticism that individuals feel they have the right to express.

    What should you do? Even though you may not be able to respond immediately, you should do so as soon as you go back online to limit any further damage.

    Inappropriate actions like Favoriting and DMing might be harmful.

    Individuals and organizations may use IFTTT recipes to do various tasks, like favorite retweets, follow back automatically, and send automated direct messages.

    The unfortunate reality is that automation cannot make decisions on its own. In light of what people may write unpredictably, selecting key phrases and establishing a recipe for a favorite tweet that includes those terms, or even postings published by certain individuals, may lead to awkward situations.

    Spam firms or individuals with a shady history should not be automatically followed back. Additionally, Twitter has a cap on the number of accounts you can follow at any given time, so don't waste your follows on spammy or pointless accounts.

    What should you do? Make sure you are aware of what is being liked under your name. Stop auto-following anyone or any company that does not inspire confidence. In our opinion, auto-DMs can work if they are personalized and humorous, but please refrain from including anything that can already be found on your profile. A new follower hasn't signed up for your blog's newsletter; they've just become one of your Twitter followers, so act accordingly!

    Useful Benefits

    Smaller companies and busy people can greatly benefit from Tweet automation. As a result of scheduling Twitter posts, your workload is reduced. A machine programmed only to do certain things is all it is in the end. But be careful not to be lulled into complacency.

    Social media platforms are all about getting people talking. That can’t be replaced by automation. Whether you use automation or not, you must always be on the lookout for suspicious activity on your Twitter account and take action as soon as you notice it.

    Conclusion

    In this article, you learned how to build and publish a Twitter bot in Python.

    Using Tweepy to access Twitter's API and configuring an AWS Elastic Beanstalk environment for deploying your Python application were also covered. In the following tutorial, a Raspberry Pi 4 will be used to build an alarm system with motion sensors.

    Master Reset Control in Ladder Logic Programming

    Introduction 

    Hello friends, I hope you are doing very well. Today we are going to learn and practice the master control reset (MCR)! So what is the MCR? Well, it is a tool you can use to control a group of devices with one push button, giving you a fast emergency response with one click for all the devices in one zone. In other words, you divide the program into zones and place each zone under a master control so that its operation is handled as one unit by one contact. This technique is useful for implementing emergency stops and also for protecting equipment by applying a safety restriction so that it does not operate while that condition is in effect.

    The concept of the master control reset (MCR)

    Figure 1 shows the master control relay in ladder logic, with a couple of rungs placed between the master control and the master control reset so they can be controlled as one zone by the master control. For example, input 1 enables the master control relay M100, which is the only way to relay the hot line of power to rungs 2 and 3, as shown in the figure. When input 1 is on, the master control relay is energized; therefore, input 2 and input 3 can energize output 1 and output 2 respectively. But if input 1 is off, the master control relay is off, and rungs 2 and 3 are disconnected from the power. In that case, even if input 2 and input 3 are on, outputs 1 and 2 will not energize because the connection to power via master control relay M100 is missing. To sum up, there is a zone that contains a couple of rungs, and these rungs are not enabled unless the master control is enabled. Figure 1 also shows the structure of the master control and master control reset: one rung enables the master control relay, and one rung at the end represents the master control reset and declares the end of the zone under master control. The code, or rungs, located between the master control and the master control reset is the zone whose execution we control based on the master control. So, if the master control is not true, the code in the zone between the master control and the master control reset is bypassed, and the next instruction after the master control reset instruction is executed.

    The concept looks very simple, but it is crucial for safety and control techniques. Also, our program can have many master controls and zones, as shown in Fig. 2, which shows more than one zone, each controlled by its own master control.

    Now we want to go further and demonstrate the master control reset with a practical example from real life. Figure 3 shows a practical industrial example: an automatic bottle-filling process. So what does master control have to do with such a process? Well, that is a good question, because it tells me you understand the master control reset and are with me on the same page. As you can see, there are start and stop pushbuttons, and we need to use the master control to start and stop the whole process regardless of the status of the individual inputs and sensors. By having such control, we can stop the process in any emergency or for maintenance. The sequence of the process is to start moving the conveyor by hitting the start push button. The conveyor keeps running until the proximity sensor comes ON. At that time, the valve opens for 5 seconds, then the conveyor continues moving, and the same process is repeated for 3 bottles. But if an emergency happens, there should be a way to stop the process, including moving the conveyor and opening the valve, even if all the conditions to do so are met. Well done! You are correct: the master control and master control reset should bracket the process so it can be enabled and disabled when such an emergency happens.

    Master control in ladder logic

    The master control and master control reset concepts are the same everywhere; however, you can notice a few differences in the ladder logic from one PLC brand to another. For example, Fig. 4 shows the ladder logic of a master control reset in a Mitsubishi PLC. You can see that the same concept has been applied: a zone of a couple of rungs is surrounded by master control and master control reset instructions based on master control relay M100. Input 1, X400, enables the master control relay M100, and rungs 2 and 3 are included in the zone under master control.

    On the other hand, master control and master control reset look a little bit different in Allen-Bradley as shown in fig.5. However, you can notice the same concept is applied by having the zone that includes a couple of rungs between the master control relay and master control reset for enabling or disabling that zone based on the logic and situation.

    Siemens also shows a few differences in its ladder logic for master control, as shown in Fig. 6. However, the same concept is there: the code to be controlled is enclosed in a zone preceded by the rung that enables the master control relay and followed by a master control reset that clears the master control and marks the end of the controlled zone.

    Practice Ladder logic example

    Guys, it is now time to enter our lab and enjoy practicing the master control and master control reset using our simulator, as usual, to validate our understanding of what we have gone through in this tutorial on ladder logic programming. In the example simulated below and shown in Fig. 7, we have designed a simple master control and master control reset to take master control of the running of Q0.0. You can notice that, despite input I0.1 being true, Q0.0 is not energized because the master control is not enabled, i.e. it is in the off status. So what happens if we enable the master control by switching on input I0.0?

    Yes, you are correct! The output Q0.0 will now work after enabling the master control by turning input I0.0 on as shown in Fig.8.

    What’s next???

    Let me thank you guys for following along up to this point. Because your knowledge of ladder logic is increasing with every single tutorial, I would like to announce that the next tutorial will cover one of the very advanced levels of ladder logic programming, which is for experts, and I think you are one by now. The sequencer output instruction in ladder logic is our topic for the next tutorial, in which we will learn and practice how to output data to multiple outputs sequentially. Please do not worry if that is not clear for now; just be there to go through it step by step and enjoy practicing a topic for an expert ladder programmer.

    Introduction to Z Transform in Signal and Systems with MATLAB

    Hey learners! Welcome to another exciting lecture in this series of signals and systems. We are learning about the transform in signals and systems, and in the previous lectures, we have seen a bulk of information about the Laplace transform. If you know the previous concepts of signal and system, then this lecture will be super easy for you to learn, but if your concepts are not clear, do not worry because we will revise all the basic information from scratch for our new readers. So, without wasting time, have a look at the topics of today.

    1. What is z transform? 

    2. What is the region of convergence of z transform?

    3. What are some of the important properties of the region of convergence?

    4. How to solve the z transform?

    5. What is an example of the z transform in MATLAB?

    6. What are the methods for the inverse z transform?

    What is z transform?

    You already know that the Laplace transform is a general transform that converts a signal from the time domain into a frequency domain. The same is the case with the z transform: it changes the domain of the signal, but in another way. The Laplace transform is mainly associated with power signals, whereas the z transform has some other characteristics. Usually, the z transform is used to understand the stability of a system. Z transforms are used for:

    • Energy signals

    • Power signals

    • Neither power nor energy signals

    Yet it is applicable only up to a certain level; beyond that level, it is not as effective. The z transform is a powerful mathematical tool and is widely used in mathematical courses and applications, including signals and systems. We introduce the z transform as:

    "The Z transform is a mathematical tool that, after different procedures, converts the differential equation (with time domain) into the algebraic equation of the z domain."

    Mathematically, we show it as:
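
    In standard notation, the bilateral z transform of a discrete-time signal x[n] is written as:

    X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}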

    As you can see, the summation runs from negative infinity to positive infinity, which means that what we have shared with you is the bilateral form of the z transform.

    By the same token, you can also predict that there is another version whose summation runs from zero to positive infinity. It is called the unilateral z transform, and it works a little differently from the first case discussed here. We describe it as:
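
    In standard notation, the unilateral z transform is:

    X(z) = \sum_{n=0}^{\infty} x[n]\, z^{-n}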

    Let’s discuss the whole statement in detail but prior to starting, recall in your mind the definition of discrete-time signals. We know that:

    “A discrete-time signal is one whose value can be measured only at discrete instants. When working with a discrete-time signal, there are intervals between values of n during which the signal has no value. In the representation of DT signals, the form x[n] is used. Discrete signals can approximate continuous-time (CT) signals.”

    Therefore, when talking about the z transform, keep this definition of a discrete-time signal in mind to understand what exactly is going on.

    Now, look at the statement given above.

    • We have the discrete-time signal in the time domain, represented by x[n].

    • We applied the z transform to it. 

    • Now we get the signal in the z domain, where each sample is multiplied by the complex number z raised to the power n with a negative sign (z^-n).

    • Do not worry about the value of n all the time. The summation sign indicates that the value of n starts from negative infinity (or zero in unilateral z transform) to positive infinity, and in this way, we found the values of the series. (This will be much clearer when we discuss the procedure).

      Here z is a complex number that is described with the help of this equation:
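
      In polar form, this complex variable is commonly written as:

      z = r e^{j\omega}

      where r is the magnitude of z and ω is its angle.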

      At this level, there is no need to describe the basic concepts of complex numbers. In general, we can summarize the above discussion through the statement given next:

      x(n)⟷X(Z)

      The whole discrete-time signal is converted into another format with the z transform as a base.

      Region of Convergence of z Transform

      The region of convergence, or simply ROC, is an important term that is used to describe the range of z in the z transform. As we have said, z is a complex number and its range, therefore, has different types of properties. 

      Properties of the Region of Convergence

      • Poles: At every point in the ROC, X(z) is finite; therefore, the ROC of the z transform does not contain any poles. (We’ll define poles in a later section.)

      • When talking about a right-sided signal, the region of convergence is always outside the z-plane circle.

      Where the shaded area is ROC.


      • When talking about a left-sided signal, the region of convergence is always inside the z-plane circle.

      Where the shaded area is ROC.

      • If we have a two-sided signal, then the ROC is in the form of a ring in the z-plane.

      • For the system to be stable, the region of convergence must include the unit circle.

      Procedure to Solve z Transform

      There are different cases of the z transform that, if we started to describe them all, would be more confusing and time-consuming than the other transforms. So, we’ll discuss a general procedure for the z transform (a short worked example follows the steps below), and with practice, you will learn it effectively.

      • Thoroughly examine the question.

      • Use the formula of the z transform given above.

      • Put z into the denominator as it has negative power. Doing so will convert the negative power into a positive. 

      • Make sure you take the common values out of the sequence. 

      • Put the value of n as 0, 1, 2, 3, 4, and so on to get the series. 

      • Solve the equation.
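
      As a quick worked illustration of these steps (a standard textbook result), take the right-sided signal x[n] = a^n u[n]:

      X(z) = \sum_{n=0}^{\infty} a^n z^{-n} = \sum_{n=0}^{\infty} (a z^{-1})^n = \frac{1}{1 - a z^{-1}}, \quad |z| > |a|

      The geometric series converges only when |a z^{-1}| < 1, which is exactly where the region of convergence |z| > |a| comes from.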

      This is the most basic and simple description, but the z transform is not always that simple. Instead of working through the long calculations, why not use MATLAB for the practical implementation? If you have a course instructor at university, you will already have an idea of how to solve the procedure by hand.

      Z transform in MATLAB

      In MATLAB, the z transform is as simple as the previous transforms were; therefore, we are emphasizing the usage of MATLAB at every step. Note that if you want a detailed theoretical procedure, you can ask your instructors, but from a practical point of view, you should use MATLAB to confirm your answers. Here is a simple example of an equation, which also shows a few small details.

      Code:

      syms n;

      f=(2*n+5)/3

      disp('x[n]=')

      disp(f)

      F=ztrans(f)

      disp('z[n]')

      disp(F)


      Output:

      Here, you can see we have used the pre-defined function of z transform given as:

      ztrans(x)

      With the help of the z transform, you can solve any equation or expression that you want. 

      Notice that we have used the display function in the code. You should know that the z transform can also be computed in MATLAB without using this function; it is only there to label the output.

      Display function in MATLAB

      The disp function is used to label and print output in the command window, much as xlabel and ylabel label output in MATLAB's graphical window. Moreover, this function can also display the value of a variable that we have defined earlier, showing the result directly. The syntax of the display function is:

      1. disp(x)

      2. disp(‘x’)

      Where,

      • To display the string value, we use inverted commas around the value. 

      • To call the value of x, we simply use it as it is. 

      • Never use this function with a capital D or any other change in the spelling; otherwise, you will get an error.

      Have a look at another example of the z transform in which we added two trigonometric functions and found their z transform.

      Code:

      syms n;

      f=sin(2*n)

      ans=ztrans(f)

      g=cos(3*3.14*n)

      ans=ztrans(g)

      Output:

      Zeros and poles of z transform

      When you study this particular topic of the z transform, you will hear the terms zeros and poles. These are simple concepts in the z transform and are usually used in numerical problems. Consider the following equation:
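
      A common way to write such an equation is the factored rational form:

      X(z) = \frac{(1 - q_1 z^{-1})(1 - q_2 z^{-1})\cdots(1 - q_M z^{-1})}{(1 - p_1 z^{-1})(1 - p_2 z^{-1})\cdots(1 - p_N z^{-1})}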

      Zeros: when the equation is written as a fraction, the numerator contains M roots, which correspond to the zeros. In the equation given above, q represents the zeros.

      Poles: the N roots of the denominator are called the poles. Here, they are represented with the letter p.

      While solving the z transform, we draw the same circular z-plane representation that we used when learning about the ROC. In a pole-zero plot, poles are conventionally marked with a cross (x) and zeros with a small circle (o).

      Inverse z Transform

      As you can guess from the name, the inverse z transform is used to convert the results of the z transform into the form before the z transform. There are different methods through which the calculations of the z transform are inverted from an equation. 

      1. Long division 

      2. The partial fraction method of inverse z transforms

      3. Residue method

      4. Power series of inverse z transform

      Long Division Method 

      This method is applicable when:

      • There is a ratio of two polynomials

      • The polynomials can not be factorized

      To apply the inverse z transform with this method, the numerator is divided by the denominator. The resulting series then gives the inverse z transform term by term. It is a laborious method.

      Partial Fraction Method of Inverse z Transform

      This method involves the rules and procedure of partial fractions (which we covered in the previous lecture); after the expression is split into partial fractions, the inverse z transform of each term is applied.

      Residue Method

      There are different names for this method including the inversion integration method and contour integral method. For this method, we have to follow the equation given below:
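
      In its standard contour-integral form, this equation is:

      x[n] = \frac{1}{2\pi j} \oint_{C} X(z)\, z^{n-1}\, dz

      where C is a closed contour inside the ROC that encircles the origin.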

      The inverse z transform is obtained by summing the residues of this expression at all of its poles.

      Power Series of Inverse z Transform

      Finally, this is the simplest and easiest way to find the inverse z transform. In this method, the expression is expanded into a power series of ascending powers, and the coefficient of z^-n is the required sample x[n].

      Inverse z transform in MATLAB 

      To apply the inverse z transform in MATLAB, we use the following formula:

      iztrans(x)

      For convenience, we have put the process of z transform and the inverse in the same code. 

      Code:

      syms n;

      f=sin(2*n)

      F=ztrans(f)

      G=iztrans(F)

      Output:

      You can clearly see that we got the required result easily. This type of transform is used, for example, when data is transferred across different networks. There are many applications of this transform, and you will learn about them in the next section.

      We have started the discussion of another transform named the z transform that is somehow similar to the Laplace transform that we have learned in the previous sessions. Z transform is a useful mathematical tool and we studied the ROC, procedure, and the inverse method of z transform. We also saw some examples in MATLAB. In the next lecture, we are going to learn some other basic information on the same topic.

      Speech Recognition System Using Raspberry pi 4

      Thank you for joining us for yet another session of this series on Raspberry Pi programming. In the preceding tutorial, we created a Pi-hole ad blocker for our home network using a Raspberry Pi 4. We also learned how to install Pi-hole on the Raspberry Pi 4 and how to access it from other devices. This tutorial will implement a speech recognition system using the Raspberry Pi and use it in our project. First, we will learn the fundamentals of speech recognition, and then we will build a game that is played using the user's voice, discovering how it all works with a speech recognition package.

      Here, you'll learn:

      • The basics of voice recognition
      • On PyPI, what packages may be found?
      • Utilize the SpeechRecognition package with a wide range of useful features.
      Where To Buy?

      No. | Components | Distributor | Link To Buy
      1 | Raspberry Pi 4 | Amazon | Buy Now

      Components

      • Raspberry pi 4
      • Microphone
       

      A Brief Overview of Speech Recognition

      Are you curious about how to incorporate speech recognition into a Python program? Well, when it comes to conducting voice recognition in Python, there are a few things you need to know first. I'm not going to overwhelm you with the technical specifics because it would take up an entire book. Things have come a long way when it comes to modern voice recognition technologies: they can recognize multiple speakers and have extensive vocabularies in several languages.

      Voice is the first element of speech recognition. A mic and an analog-to-digital converter are required to turn speech into an electronic signal and digital data. The audio can be converted to text using various models once it has been digitized.

      Hidden Markov models (HMMs) are used in most modern voice recognition programs. They assume that an audio signal can be reasonably treated as stationary when viewed over a short timescale.

      In a conventional HMM, the audio signal is broken into 10-millisecond chunks. Each fragment's spectrum is mapped to a vector of real numbers called cepstral coefficients. The dimension of this vector might range from 10 to 32, depending on the device's accuracy. The HMM's output is a sequence of these vectors.

      Training is required for this calculation because the sound of a phoneme varies from speaker to speaker and even within a single utterance by the same person. A special algorithm is then applied to determine the most probable word to produce the specified phoneme sequence.

      This entire process can be computationally costly, as one might expect. Many current speech recognition programs therefore apply feature transformations and dimension-reduction methods before HMM recognition. It is also possible to limit an audio input to only those parts that are likely to contain speech using voice activity detectors. As a result, the recognizer does not have to waste time studying sections of the signal that aren't relevant.

      Choosing a Speech Recognition Tool

      There are a few speech recognition packages available on PyPI.

      Some of these packages use NLP to discern a user's intent, which goes beyond simple speech recognition. Other services focus on speech-to-text conversion alone, such as Google Cloud Speech.

      SpeechRecognition is the most user-friendly of all the packages.

      Voice recognition requires audio input, which SpeechRecognition makes a cinch to retrieve. Rather than requiring you to write your own code for connecting to microphones and interpreting audio files, SpeechRecognition will get you up and running in minutes.

      Since it wraps a variety of common speech application programming interfaces, this SpeechRecognition package offers a high degree of extensibility. The SpeechRecognition library is a fantastic choice for every Python project because of its flexibility and ease of usage. The APIs it encapsulates may or may not be able to support every feature. For SpeechRecognition to operate in your situation, you'll need to research the various choices.

      You've decided to give SpeechRecognition a go, and now you need to get it installed in your environment.

      Speech Recognition Software Installation

      Using pip, you can install the SpeechRecognition package from the terminal:

      $ pip install SpeechRecognition

      When you've completed the setup, you should start a command line window and type:

      import speech_recognition as sr

      sr.__version__

      Let's leave this window open for now. Soon enough, you'll be able to use it.

      If you only need to deal with pre-existing audio recordings, SpeechRecognition will work straight out of the box. A few prerequisites are required for some use cases, though. In particular, the PyAudio library is needed to capture audio from a microphone.

      As you continue reading, you'll discover which components you require. For the time being, let's look at the package's fundamentals.

      Recognizer Class

      The recognizer is at the heart of Speech Recognition's magic.

      Naturally, the fundamental function of a Recognizer class is to recognize spoken words and phrases. Each instance has a wide range of options for identifying voice from the input audio.

      The process of setting up a Recognizer is straightforward; it's as simple as typing the following in your active interpreter window:

      r = sr.Recognizer()

      Each Recognizer instance has seven methods for recognizing speech from an audio source, each using a different speech recognition API. At the time of writing, these are recognize_bing(), recognize_google(), recognize_google_cloud(), recognize_houndify(), recognize_ibm(), recognize_sphinx(), and recognize_wit().

      Of these seven, only recognize_sphinx() works offline, using CMU Sphinx; the other six require an internet connection.

      This tutorial does not cover all of the capabilities and features of every API in detail. SpeechRecognition comes with a default API key for the Google Web Speech API, allowing you to get up and running with that service immediately; as a result, this tutorial uses the Web Speech API extensively. The other six APIs all require authentication with either an API key or a username and password combination.

      The default API key is provided for testing purposes only, and Google may revoke it at any time, so using the Google Web Speech API in a production setting is not recommended. Even with a valid API key, there is no way to raise the daily request quota. Still, if you learn how to use SpeechRecognition today, it will be straightforward to apply it to any of your projects.

      Whenever a recognize_*() function fails to recognize the speech, it raises an exception. A RequestError is raised if the API is unreachable; in the case of recognize_sphinx(), a faulty Sphinx install could cause this. For the other six methods, a RequestError is raised if quotas are exceeded, servers are unreachable, or there is no internet connection.

      Let's call recognize_google() in our interpreter window and see what happens!

      Exactly what has transpired?

      Something like this is most likely what you've gotten.

      I'm sure you could have foreseen this. How is it possible to tell something from nothing?

      Each of the Recognizer's recognize_*() methods expects an audio_data argument. When using SpeechRecognition, audio_data must be an instance of the AudioData class.

      To construct an AudioData instance, you have two options: you can either use an audio file or record your audio. We'll begin with audio files because they're simpler to work with.

      Using Audio Files

      To proceed, you must first obtain and save an audio file. Use the same location where your Python interpreter is running to store the file.

      Speech Recognition's AudioFile interface allows us to work with audio files easily. As a context manager, this class gives the ability to access the information of an audio file by providing a path to its location.

      File Formats that are supported

      This software supports various file formats, which include:

      • WAV
      • AIFF
      • FLAC

      To work with FLAC files, you'll need the FLAC command-line encoding tool installed.

      Recording data using the record() Function

      To work with the "har.wav" file, enter the following commands into your interpreter window:

      har = sr.AudioFile('har.wav')

      with har as source:

          audio = r.record(source)

      The context manager opens the file and reads its contents, storing the data in the AudioFile instance named source. Then the record() method captures the data from the entire file into an AudioData instance. Verify this by checking the type of audio:

      type(audio)

      You can now use recognize_google() to see if any voice can be found in the audio file. You might have to wait a few seconds for the output to appear, based on the speed of your broadband connection.

      r.recognize_google(audio)

      Congratulations! You've just finished your very first audio transcription!

      If you're curious, the "har.wav" file contains examples of Harvard Sentences. The IEEE published these phrases in 1965 for testing the speech intelligibility of telephone lines, and they are still used in VoIP and telecom testing today.

      The Harvard Sentences comprise 72 lists of 10 phrases. You'll find free recordings of these phrases on the Open Speech Repository website, with recordings available in several languages. They offer many free resources for putting your code through its paces.

      Segments with a start and end time

      You may want to capture only a portion of the speech in a file. The record() method accepts a duration keyword argument, which stops the recording after the specified number of seconds.

      For example, the following captures only the first four seconds of the file:

      with har as source:

          audio = r.record(source, duration=4)

      r.recognize_google(audio)

      When used inside a with block, the record() method always picks up where it left off in the file stream. As a result, recording for four seconds and then recording for four more seconds returns the second four seconds of audio, not the same segment again.

      with har as source:

          audio1 = r.record(source, duration=4)

          audio2 = r.record(source, duration=4)

      r.recognize_google(audio1)

      r.recognize_google(audio2)

      As you can see, the third phrase ended up inside audio2. When a duration is specified, the recording can stop mid-phrase, or even mid-word, which can hurt the accuracy of the transcription. More on this in a moment.

      The offset keyword argument can be passed to record() along with a duration. It specifies how many seconds from the start of the file to skip before recording.

      with har as source:

          audio = r.record(source, offset=4, duration=3)

      r.recognize_google(audio)

      Using the offset and duration keyword arguments can help you segment an audio file if you know the structure of the speech beforehand. Used carelessly, however, they can produce poor transcriptions. To see this effect, try the following in your interpreter:

       

      with har as source:

          audio = r.record(source, offset=4.7, duration=2.8)

      r.recognize_google(audio)

      Because the recording began at 4.7 seconds, partway through the phrase, the API received only the fragment "akes heat," which it matched to "Mesquite."

      Similarly, the end of the recording caught "a co," the beginning of the third phrase, and the API matched this to "Aiko."

      Noise is another cause of inaccurate transcriptions. The examples above all worked because the audio is relatively clean; in the real world, you cannot expect noise-free audio unless you can process the soundtrack in advance.

      Noise Can Affect Speech Recognition.

      Noise is an unavoidable part of everyday existence. All audiotapes have some noise level, and speech recognition programs can suffer if the noise isn't properly handled.

      To hear how noise can impair speech recognition, grab the "jackhammer" audio sample and make sure it is saved in the working directory of your interpreter session.

      The sound of a jackhammer is heard in the background while the words "the stale scent of old beer remains" are spoken.

      Try to translate this file and see what unfolds.

      jackhammer = sr.AudioFile('jackhammer.wav')

      with jackhammer as source:

          audio = r.record(source)

      r.recognize_google(audio)

      How wrong!

      So, how do you deal with this? The Recognizer class has an adjust_for_ambient_noise() method you might want to try.

      with jackhammer as source:

          r.adjust_for_ambient_noise(source)

          audio = r.record(source)

      r.recognize_google(audio)

      You're getting closer, but it's still not quite there yet. In addition, the statement's first word is missing: "the." How come?

      The adjust_for_ambient_noise() method reads the first second of the file stream to calibrate the recognizer to the noise level. As a result, that portion of the stream has already been consumed by the time you call record().

      adjust_for_ambient_noise() accepts a duration keyword argument to change the time frame used for analysis. The default value is 1 second, but you can set it to whatever you choose. Let's reduce it by half.

      with jackhammer as source:

          r.adjust_for_ambient_noise(source, duration=0.5)

          audio = r.record(source)

      r.recognize_google(audio)

      Now you get "the" at the start of the sentence, but you have a whole new set of problems. Sometimes the noise simply can't be removed from the signal because there is too much of it; that's the case with this particular file.

      These problems may necessitate some sound pre-processing if you encounter them regularly. Audio editing programs, which can add filters to the audio, can be used to accomplish this. For the time being, know that background noise can cause issues and needs to be handled to improve voice recognition accuracy.

      When working with noisy files, it can be helpful to look at the actual API response. Most APIs return a JSON string containing several possible transcriptions; recognize_google() returns only the most likely transcription unless you explicitly request the full response.

      You do this by calling recognize_google() with the show_all keyword argument set to True.

      r.recognize_google(audio, show_all=True)

      recognize_google() now returns a dictionary with the key 'alternative', which contains a list of possible transcripts. The structure of this response can vary between APIs, but it's mainly useful for debugging.
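
      As a minimal sketch of how you might pull the best alternative out of that full response (the exact structure is not guaranteed and can vary, so the key names below are assumptions based on the usual Google Web Speech shape):

      result = r.recognize_google(audio, show_all=True)

      # result is usually a dict with an 'alternative' list; each entry may
      # contain a 'transcript' and sometimes a 'confidence' value (assumed keys).
      if result and "alternative" in result:
          best = result["alternative"][0]
          print(best.get("transcript"), best.get("confidence"))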

      As you've seen, the SpeechRecognition package has a lot to offer. Aside from gaining experience with the offset and duration keyword arguments, you also learned about the harmful effects noise has on transcription accuracy.

      The fun is about to begin. Make your project dynamic by using a mic instead of transcribing audio clips that don't require any input from the user.

      Using Microphone

      To capture input from a microphone with SpeechRecognition, you must install the PyAudio library.

      Install PyAudio

      Use the command below to install PyAudio on the Raspberry Pi:

      sudo apt-get install python-pyaudio python3-pyaudio

      Confirmation of Successful Setup

      Using the console, you can verify that PyAudio is working properly.

      python -m speech_recognition

      Ensure your mic is turned on and unmuted. This is what you'll see if everything went according to plan:

      Talk into your mic and let SpeechRecognition transcribe your voice to see how accurate it is.

      Microphone instance

      The recognizer class should be created in a separate interpreter window.

      import speech_recognition as sr

      r = sr.Recognizer()

      This time, instead of an audio file, you'll use the system's microphone as the input. Get at it by instantiating the Microphone class:

      mic = sr.Microphone()

      On the Raspberry Pi, you may have to provide a device index to use a particular mic. To get a list of microphone names, call the Microphone class's static method:

      sr.Microphone.list_microphone_names()

      Keep in mind that the results may vary from those shown in the examples.

      The mic's device index is its position in the list returned by list_microphone_names(). For example, if the "front" mic appears at index 3 in the output, you would create the Microphone instance like this:

      mic = sr.Microphone(device_index=3)

      Use listen() to record the audio from the mic

      A Mic instance is ready, so let's get started recording.

      Like AudioFile, Microphone is a context manager. You capture input from the mic using the Recognizer's listen() method inside the with block. This method takes the audio source as its first argument and records until it detects silence.

      with mic as source:

          audio = r.listen(source)

      Once you've entered the block, try saying "hello" into your mic. Wait patiently for the interpreter prompt to reappear; once you see the ">>>" prompt again, you're ready to recognize the speech.

      r.recognize_google(audio)

      If the prompt never reappears, your mic is probably picking up too much background noise. Press Ctrl+C to halt execution and get your prompt back.

      To handle the noise level, use the Recognizer class's adjust_for_ambient_noise() method, just as you did with the noisy audio file. Because microphone input is far less predictable than an audio file, it's wise to do this every time you listen for mic input.

      with mic as source:

          r.adjust_for_ambient_noise(source)

          audio = r.listen(source)

      After running the code above, wait a second for adjust_for_ambient_noise() to finish, then say "hello" into the mic. Again, wait patiently for the interpreter prompt to return before trying to recognize the speech.

      Keep in mind that adjust_for_ambient_noise() analyzes the audio input for one second by default. You can shorten this with the duration keyword argument if necessary.

      The SpeechRecognition documentation recommends using a duration of no less than 0.5 seconds. In some cases longer durations work better; the lower the ambient noise, the lower the value you need. Sadly, this detail is often overlooked during development. In my opinion, the default duration of one second is sufficient for most purposes.

      How to handle speech that isn't recognizable?

      Using your interpreter, type in the above code snippet and mutter anything nonsensical into the mic. You may expect a response such as this:

      An UnknownValueError exception is raised if the API cannot translate the speech into text. You should always wrap API calls in try and except blocks to handle this exception.

      Getting the exception thrown may take more effort than you'd imagine, since the API tries very hard to transcribe any vocal sound. For me, even short grunts were transcribed as words like "how," while a cough, hand claps, or tongue clicks would reliably raise the exception.
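
      Here is a minimal sketch of that pattern, assuming r is a Recognizer and audio is an AudioData instance obtained as shown above:

      try:
          # Attempt to transcribe the captured audio
          text = r.recognize_google(audio)
          print("You said: " + text)
      except sr.UnknownValueError:
          # Raised when the API could not understand the audio
          print("Sorry, I could not understand that.")
      except sr.RequestError as e:
          # Raised when the API is unreachable or the quota is exceeded
          print("API request failed: {}".format(e))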

      A "Guess the Word" game to Put everything together

      To put what you've learned from the SpeechRecognition library into practice, develop a simple game that randomly selects a phrase from a set of words and allows the player three tries to guess it.

      The full script is listed below:

      import random
      import time

      import speech_recognition as sr


      def recognize_speech_from_mic(recognizer, microphone):
          if not isinstance(recognizer, sr.Recognizer):
              raise TypeError("`recognizer` must be `Recognizer` instance")

          if not isinstance(microphone, sr.Microphone):
              raise TypeError("`microphone` must be `Microphone` instance")

          # Adjust for ambient noise, then record the user's speech
          with microphone as source:
              recognizer.adjust_for_ambient_noise(source)
              audio = recognizer.listen(source)

          response = {
              "success": True,
              "error": None,
              "transcription": None
          }

          # Try to transcribe the audio, handling API and recognition errors
          try:
              response["transcription"] = recognizer.recognize_google(audio)
          except sr.RequestError:
              response["success"] = False
              response["error"] = "API unavailable"
          except sr.UnknownValueError:
              response["error"] = "Unable to recognize speech"

          return response


      if __name__ == "__main__":
          WORDS = ["apple", "banana", "grape", "orange", "mango", "lemon"]
          NUM_GUESSES = 3
          PROMPT_LIMIT = 5

          recognizer = sr.Recognizer()
          microphone = sr.Microphone()

          word = random.choice(WORDS)

          instructions = (
              "I'm thinking of one of these words:\n"
              "{words}\n"
              "You have {n} tries to guess which one.\n"
          ).format(words=', '.join(WORDS), n=NUM_GUESSES)

          print(instructions)
          time.sleep(3)

          for i in range(NUM_GUESSES):
              # Prompt the user up to PROMPT_LIMIT times for a usable guess
              for j in range(PROMPT_LIMIT):
                  print('Guess {}. Speak!'.format(i+1))
                  guess = recognize_speech_from_mic(recognizer, microphone)
                  if guess["transcription"]:
                      break
                  if not guess["success"]:
                      break
                  print("I didn't catch that. What did you say?\n")

              # Stop the game if the API was unreachable
              if guess["error"]:
                  print("ERROR: {}".format(guess["error"]))
                  break

              print("You said: {}".format(guess["transcription"]))

              guess_is_correct = guess["transcription"].lower() == word.lower()
              user_has_more_attempts = i < NUM_GUESSES - 1

              if guess_is_correct:
                  print("Correct! You win!".format(word))
                  break
              elif user_has_more_attempts:
                  print("Incorrect. Try again.\n")
              else:
                  print("Sorry, you lose!\nI was thinking of '{}'.".format(word))
                  break

      Let's analyze this a little bit further.

      The recognize_speech_from_mic() function takes a Recognizer and a Microphone instance as arguments and returns a dictionary with three keys. The first, "success," indicates whether or not the API request was successful. The second, "error," is either None or a message indicating that the API is unavailable or that the speech was unintelligible. Finally, the "transcription" key contains the transcription of the audio captured from the microphone.

      A TypeError is raised if the recognizer or microphone arguments are not of the correct type:

      The listen() method is then used to record audio from the microphone.

      On every call to recognize_speech_from_mic(), the recognizer is recalibrated using adjust_for_ambient_noise().

      After that, recognize_google() is invoked to transcribe any speech in the audio. RequestError and UnknownValueError exceptions are caught by the try and except block and handled accordingly. Finally, recognize_speech_from_mic() returns the response dictionary containing the success flag, any error message, and the transcribed speech from the API request.

      In an interpreter window, execute the following code to see if the function works as expected:

      import speech_recognition as sr

      from guessing_game import recognize_speech_from_mic

      r = sr.Recognizer()

      m = sr.Microphone()

      recognize_speech_from_mic(r, m)

      The actual gameplay is quite basic. First, a list of words, the maximum number of allowed guesses, and a prompt limit are declared:

      Next, Recognizer and Microphone instances are created, and a word is selected at random from WORDS.

      After the instructions are printed, a for loop handles each of the user's attempts at guessing the chosen word. Inside this outer loop, another loop prompts the user up to PROMPT_LIMIT times, attempting to recognize each attempt with recognize_speech_from_mic() and storing the returned dictionary in the local variable guess.

      If the "transcription" value in guess is not None, the user's speech was transcribed and the inner loop ends with a break. If nothing was transcribed and the "success" field is False, an API error occurred, and the loop is also broken. Otherwise, the API request succeeded but the speech was unintelligible, so the user is warned and the for loop prompts them again, giving them another chance.

      Back in the outer loop, if the guess dictionary contains an error, an error message is printed and a break exits the outer for loop, ending the program.

      If there are no errors, the transcription is compared with the randomly chosen word. The lower() method of string objects is used so the comparison is case-insensitive; it doesn't matter whether the API returns "Apple" or "apple" for the word "apple."

      If the user's guess is correct, the game ends and they win. If the guess is incorrect and they have attempts remaining, the outer loop restarts and a fresh guess is retrieved. Otherwise, the user loses the game.

      This is what you'll get when you run the program:

      Recognition of Other Languages

      Speech recognition in other languages, on the other hand, is entirely doable and incredibly simple.

      To recognize speech in a language other than English, set the language keyword argument of the recognize_*() method to a string containing the desired language tag.

      r = sr.Recognizer()

      with sr.AudioFile('path/to/audiofile.wav') as source:

          audio = r.record(source)

      r.recognize_google(audio, language='fr-FR')

      Only a few of the recognize_*() methods accept a language keyword argument.

      What are the applications of speech recognition software?

      1. Mobile Payment with Voice command

      Do you ever have second thoughts about how you're going to pay for future purchases? Has it occurred to you that, in the future, you may be able to pay for goods and services simply by speaking? There's a good chance that will happen soon! Several companies are already developing voice commands for money transfers.

      This system allows you to speak a one-time passcode rather than entering a passcode before buying the product. When it comes to online security, think of captchas and other one-time passwords that are read aloud. This is a considerably better option than reusing a password every time. Soon, voice-activated mobile banking will be widely used.

      2. AI Assistants

      When driving, you may use such Intelligent systems to get navigation, perform a Google search, start a playlist of songs, or even turn on the lights in your home without touching your gadget. These digital assistants are programmed to respond to every voice activation, regardless of the user.

      There are new technologies that enable Ai applications to recognize individual users. This tech, for instance, allows it to respond to the voice of a certain person exclusively. Using an iPhone as an example, it's been around for a few years now. If you want Siri to only respond to your commands and queries when you speak to it, you can do so on your iPhone. Unauthorized access to your gadgets, information, and property is far less possible when your voice can only activate your Artificial intelligent assistant. Anyone who is not permitted to use the assistant will not be able to activate it. Other uses for this technology are almost probably on the horizon.

      3. Translation Application

      In a distant place, imagine attempting to check into an unfamiliar hotel. Since neither you nor the front desk employee is fluent in the other country's language, no one is available to act as a translator. You can use the translator device to talk into the microphone and have your speech processed and translated verbally or graphically to communicate with another person.

      Additionally, this tech can benefit multinational enterprises, educational institutions, or other institutions. You can have a more productive conversation with anyone who doesn't speak your language, which helps break down the linguistic barrier.

      Conclusion

      In this tutorial, you saw how to install the SpeechRecognition package and use its Recognizer interface, which can recognize audio from both files and the microphone. You also learned how to use the offset and duration keyword arguments of record() to extract segments from an audio recording.

      The recognizer's tolerance to ambient noise can be adjusted using the adjust_for_ambient_noise() method, which you've seen in action. Recognizer instances can also raise RequestError and UnknownValueError exceptions, and you've learned how to handle them with a try and except block.

      More can be learned about speech recognition than what you've just read. We will implement the RTC module integration in our upcoming tutorial to enable real-time control.

      Smart Security System using Facial Recognition with Raspberry Pi 4

      Greetings, and welcome to the next tutorial of our Raspberry Pi programming series. In the previous tutorial, we learned how to build a smart attendance system using an RFID card reader to sign students into a class. This tutorial will show you how to build a face-recognition program on a Raspberry Pi. Two Python programs will be used in the lesson: a Trainer program that analyzes a collection of photographs of a particular individual and generates a dataset (a YML file), and a Recognizer program that uses that YML file to detect a face and then announce the person's name when the face is recognized.

      Where To Buy?
      No. | Components     | Distributor | Link To Buy
      1   | Breadboard     | Amazon      | Buy Now
      2   | DC Motor       | Amazon      | Buy Now
      3   | Jumper Wires   | Amazon      | Buy Now
      4   | Raspberry Pi 4 | Amazon      | Buy Now

      Components

      • Raspberry Pi
      • Breadboard
      • L293 or SN755410 motor driver chip
      • Jumper wires
      • DC motor
      • 5v power supply

      A growing number of us already use face recognition software without realizing it. Facial recognition is used in several applications, from basic Facebook tag suggestions to advanced security screening surveillance. Schools in China have begun employing facial recognition to track students' attendance and behaviour. Retail stores use face recognition to classify their clients and identify those who have a history of crime. There's no denying that this tech will be everywhere soon, especially with so many other developments in the works.

      How does facial recognition work?

      When it comes to facial recognition, biometric authentication goes well beyond simply identifying human faces in images or videos; an additional step determines the person's identity. Facial recognition software compares an image of a person's face against a database to see whether the features match. The technology is built so that facial expressions and hairstyle do not affect its ability to find a match.

      How can face recognition be used when it comes to smart security systems?

      The first thing you should do if you want to make your home "smart" is to focus on security. Your most prized possessions are housed at this location, and protecting them is a must. You can monitor your home security status from your computer or smartphone thanks to a smart security system when you're outdoors.

      Traditionally, a security company would install a wired system in your house and sign you up for professional monitoring. The story has changed: when setting up a smart home system, you can do it yourself, and your smartphone acts as the professional monitor, providing real-time information and notifications.

      Face recognition is the ability of a smart camera in your house to identify a person based on their face. Consequently, you will have to inform the algorithm what face goes with what name for face recognition to operate. Facial detection in security systems necessitates the creation of user accounts for family members, acquaintances, and others you want to be identified by the system. Your doors or the inside of your house will be alerted when they arrive.

      Face-recognition technology allows you to create specific warning conditions. For example, you can configure a camera to inform you when an intruder enters your home with a face the camera doesn't recognize.

      Astonishing advancements in smart tech have been made in recent years. Companies are increasingly offering automatic locks with face recognition. You may open your doors just by smiling at a face recognition system door lock. You could, however, use a passcode or a real key to open and close the smart door. You may also configure your smart house lock to email you an emergency warning if someone on the blacklist tries to unlock your smart security door.

      How to install OpenCV for Raspberry Pi 4.

      OpenCV, as previously stated, will be used to identify and recognize faces. So, before continuing, let's set up the OpenCV library. Your Pi 4 needs a 2A power adapter and an HDMI cable because we won't be able to access the Pi's screen through SSH. The OpenCV documentation is a good place to learn how image processing works, but I'm not going to go into it here.

      Installing OpenCV using pip

      pip is well-known for making it simple to add new libraries to the python language. In addition, there is a technique to install OpenCV on a Raspberry Pi via PIP, but it didn't work for me. We can't obtain complete control of the OpenCV library when using pip to install OpenCV; however, this might be worth a go if time is of the essence.

      Ensure pip is set up on your Raspberry Pi. Then, one by one, execute the lines of code listed below into your terminal.

      sudo apt-get install libhdf5-dev libhdf5-serial-dev

      sudo apt-get install libqtwebkit4 libqt4-test

      sudo pip install opencv-contrib-python

      How OpenCV Recognizes Face

      Facial recognition and face detection are not the same thing, and this must be clarified before we proceed. With face detection, the program merely detects that a face is present and has no clue who the person is. Facial recognition software not only detects the face but also recognizes whose face it is. At this point, it's pretty evident that face detection comes before face recognition. Let's look briefly at how OpenCV recognizes a person or other objects.

      Essentially, a webcam feed is a long series of continuously updating still photos, and every image is nothing more than a jumble of pixels with varying values arranged in a specific order. So, how does a computer program identify a face among all of these pixels? Describing the underlying techniques is outside the scope of this post, but since we're utilizing the OpenCV library, facial recognition is a straightforward process that doesn't necessitate a deeper understanding of the underlying principles.

      Using Cascade Classifiers for Face Detection

      We can only recognize a face once we have detected it. For detecting objects, including faces, OpenCV provides Classifiers: pre-trained datasets that may be utilized to detect a certain item, such as a face. Classifiers can also detect other objects, such as mouths, eyebrows, vehicle number plates, and smiles.

      Alternatively, OpenCV allows you to design your custom Classifier for detecting any objects in images by retraining the cascade classifier. For the sake of this tutorial, we'll be using the classifier named "haarcascade_frontalface_default.xml" to identify faces from the camera. We'll learn more about image classifiers and how to apply them in code in the following sections.
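
      As a quick illustration of how a cascade Classifier is used (a minimal sketch; 'test.jpg' is just a placeholder image name):

      import cv2

      # Load the pre-trained frontal-face Classifier shipped with OpenCV
      face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

      img = cv2.imread('test.jpg')                  # read a test image
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

      # Returns a list of (x, y, width, height) rectangles, one per detected face
      faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

      for (x, y, w, h) in faces:
          cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

      cv2.imwrite('faces_detected.jpg', img)
      print("Found {} face(s)".format(len(faces)))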

      Setup the raspberry pi camera

      For the face training and detection, we only need the pi camera, and to install this, insert the raspberry pi camera in the pi camera slot as shown below. Then go to your terminal, open the configuration window using "sudo raspi-config", and press enter. Navigate to the interface options and activate the pi camera module. Accept the changes and finish the setup. Then reboot your RPi.

      How to Setup the Necessary Software

      First, ensure pip is set up, and then install the following packages using it.

      Install dlib: Dlib is a set of libraries for building ML and data analysis programs in the real world. To get dlib up and running, type the following command into your terminal window.

      pip install dlib

      If everything goes according to plan, you should see something similar after running this command.

      Install pillow: The Python Image Library, generally known as PIL, is a tool for opening, manipulating, and saving images in various formats. The following command will set up PIL for you.

      pip install pillow

      You should receive the message below once this app has been installed.

      Install face_recognition: The face recognition package is often the most straightforward tool for detecting and manipulating human faces. Face recognition will be made easier with the help of this library. Installing this library is as simple as running the provided code.

      pip install face_recognition --no-cache-dir

      If all goes well, you should see something similar to the output shown below once the software is installed. Due to its size, I used the "--no-cache-dir" command-line option to install the package without keeping any of its cache files.

      Face Recognition Folders

      The file named "haarcascade_frontalface_default.xml" contains the pre-trained Classifier used to detect faces. The training script will also build a "face-trainner.yml" file based on the photos found in the face images directory.

      Start the face images folder with a collection of face images.

      The face images folder indicated above should contain subdirectories with the names of each person to be identified and several sample photographs of them. Esther and x have been identified for this tutorial. As a result, I've just generated the two sub-directories shown below, each containing a single image.

      You must rename the sub-directories after the people you want to identify and replace the photographs with pictures of them. A minimum of about five images per individual appears to be optimal; however, the more people and images you add, the slower the software will run.
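
      For reference, the folder layout would look something like this (the image file names are just examples):

      Face_Images/
          Esther/
              esther_1.jpg
              esther_2.jpg
          x/
              x_1.jpg
              x_2.jpg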

      Face trainer program

      Face Trainer.py is the Python program used to train on a new face. Its purpose is to open the face photographs folder and scan it for faces. As soon as it detects a face, it crops it, converts it to grayscale, and, using the packages we previously installed, saves the training data in a file named face-trainner.yml. The information in this file is used to identify the faces later. The whole Trainer program is provided at the end; below, we go over its most important lines.

      The first step is to import the necessary modules. The cv2 package is used to process images, the NumPy library is used to convert the images into numerical arrays, the os package is used for directory navigation, and PIL is used to open and manipulate the photos.

      import cv2

      import numpy as np

      import os

      from PIL import Image

      Ensure that the XML file in question is located in the project directory to avoid running into an error. The LBPH face recognizer is then constructed and stored in the recognizer variable.

      face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
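
      # Note: this call requires the OpenCV contrib modules; on newer OpenCV builds the equivalent is cv2.face.LBPHFaceRecognizer_create()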

      recognizer = cv2.createLBPHFaceRecognizer()

      Face_Images = os.path.join(os.getcwd(), "Face_Images")

      In order to open all of the files ending in .jpeg, .jpg, or .png within every sub-folder of the face images folder, we traverse the directory tree with for loops. The path to every image is recorded in a variable named path, and the name of the directory that contains it (the name of the person in the photos) is stored in a variable named person_name.

      for root, dirs, files in os.walk(Face_Images):

          for file in files: #check every directory in it

              if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):

                  path = os.path.join(root, file)

                  person_name = os.path.basename(root)

      Then, whenever the person's name changes (that is, we have moved on to a new sub-directory), we increment a variable named Face_ID so that each individual ends up with a unique Face_ID.

      if pev_person_name!=person_name:

          Face_ID=Face_ID+1 #If yes increment the ID count

          pev_person_name = person_name

      Grayscale photos are simpler for OpenCV to work with than colour ones because the BGR values can be ignored, so we convert the image to grayscale and then resize it so that all the pictures are the same size. To avoid having your face cut out, keep it near the centre of the photo. These images are then converted into NumPy arrays so that they have a numerical representation. Afterwards, the classifier detects the face in the photo and saves the result in a variable named faces.

      Gery_Image = Image.open(path).convert("L")

      Crop_Image = Gery_Image.resize( (550,550) , Image.ANTIALIAS)

      Final_Image = np.array(Crop_Image, "uint8")

      faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)

      The Area of Attention (the region of interest, or ROI) is the cropped portion of the image where the face was found, and it is what the face-recognition system is trained on. Every Area of Attention face is appended to a variable named x_train, and the corresponding Face_ID is appended to y_ID. We then feed the recognizer with this training data, and the resulting model is saved to a file.

      for (x,y,w,h) in faces:

          roi = Final_Image[y:y+h, x:x+w]

          x_train.append(roi)

          y_ID.append(Face_ID)

       

      recognizer.train(x_train, np.array(y_ID))

      recognizer.save("face-trainner.yml")

      You'll notice that the face-trainner.yml file is updated whenever you run this program. If you make any modifications to the photographs in the Face Images folder, make sure to run this code again. For debugging purposes, you can print out the Face_ID, the path, the person's name, and the NumPy arrays.

      Face recognition program

      Now that the trained data has been prepared, we can begin using it to identify people. We'll use a USB webcam or Pi camera to feed video into the face recognizer program, frame by frame. Once a face is found in a frame, it is compared against all of our previously trained Face IDs. Finally, the identified person's name is displayed in a box around their face. The whole program is presented afterwards, with the explanation below.

      Import the required module from the training program and use the classifier because we need to do more facial detection in this program.

      import cv2

      import numpy as np

      import os

      from time import sleep

      from PIL import Image

      face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

      recognizer = cv2.createLBPHFaceRecognizer()

      The names of the people from the training folder should be entered in the variable named labels. Make sure they appear in the same order as their Face IDs were assigned during training. In my case, they are "Esther" and "Unknown".

      labels = ["Esther", "Unknown"]

      We need the trainer file to detect faces, so we import it into our software.

      recognizer.load("face-trainner.yml")

      The camera provides the video stream. It's possible to access any second pi camera by replacing 0 with 1.

      cap = cv2.VideoCapture(0)

      In the next step, we split the footage into individual frames, convert each frame to grayscale, and then search for a face in it. Once a face has been detected, the frame is cropped so that only the grayscale region of interest containing the face is kept.

      ret, img = cap.read()

      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

      faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

      for (x, y, w, h) in faces:

          roi_gray = gray[y:y+h, x:x+w]

          id_, conf = recognizer.predict(roi_gray)

      The predict() call returns an ID and a confidence value, which tells us how sure the program is that it has identified the right person. The code below uses the ID to look up the person's name from the labels list, draws a square around the face, and writes the name next to it.

      if conf>=80:

          font = cv2.FONT_HERSHEY_SIMPLEX

          name = labels[id_]

          cv2.putText(img, name, (x,y), font, 1, (0,0,255), 2)

          cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),2)

      Finally, we display the frame we just evaluated and break out of the video loop when the 'q' key is pressed, which cv2.waitKey() checks for.

      cv2.imshow('Preview',img)

      if cv2.waitKey(20) & 0xFF == ord('q'):

      break

      While running this application, ensure the Raspberry is linked to a display via HDMI. A display with your video stream and the name will appear when you open the application. There will be a box around the face identified in the video feed, and if your software recognizes the face, it displays that person’s name. As evidenced by the image below, we've trained our software to identify my face, which shows the recognition process in action.

      The face recognition code

      import cv2

      import numpy as np

      import os

      from PIL import Image

      labels = ["Esther", "Unknown"]

      face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

      recognizer = cv2.createLBPHFaceRecognizer()

      recognizer.load("face-trainner.yml")

      cap = cv2.VideoCapture(0)

      while(True):

          ret, img = cap.read()

          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

          faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5) #Recog. faces

          for (x, y, w, h) in faces:

              roi_gray = gray[y:y+h, x:x+w]

              id_, conf = recognizer.predict(roi_gray)

              if conf>=80:

                  font = cv2.FONT_HERSHEY_SIMPLEX

                  name = labels[id_]

                  cv2.putText(img, name, (x,y), font, 1, (0,0,255), 2)

                  cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)

          cv2.imshow('Preview',img)

          if cv2.waitKey(20) & 0xFF == ord('q'):

              break

      cap.release()

      cv2.destroyAllWindows()

      The face trainer code

      import cv2

      import numpy as np

      import os

      from PIL import Image

      face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

      recognizer = cv2.createLBPHFaceRecognizer()

       

      Face_ID = -1

      pev_person_name = ""

      y_ID = []

      x_train = []

      Face_Images = os.path.join(os.getcwd(), "Face_Images")

      print (Face_Images)

      for root, dirs, files in os.walk(Face_Images):

          for file in files:

              if file.endswith("jpeg") or file.endswith("jpg") or file.endswith("png"):

                  path = os.path.join(root, file)

                  person_name = os.path.basename(root)

                  print(path, person_name)

                  if pev_person_name!=person_name:

                      Face_ID=Face_ID+1

                      pev_person_name = person_name

                  Gery_Image = Image.open(path).convert("L")

                  Crop_Image = Gery_Image.resize( (550,550) , Image.ANTIALIAS)

                  Final_Image = np.array(Crop_Image, "uint8")

                  faces = face_cascade.detectMultiScale(Final_Image, scaleFactor=1.5, minNeighbors=5)

                  print (Face_ID,faces)

                  for (x,y,w,h) in faces:

                      roi = Final_Image[y:y+h, x:x+w]

                      x_train.append(roi)

                      y_ID.append(Face_ID)

      recognizer.train(x_train, np.array(y_ID))

      recognizer.save("face-trainner.yml")

      DC motor circuit

      Since the "How to operate DC motor in Rpi 4" guide has covered the basics of controlling a DC motor, I won't provide much detail here. Please read this topic if you haven't already. Check all the wiring before using the batteries in your circuit, as outlined in the image above. Everything must be in place before connecting your breadboard's power lines to the battery wires.

      Testing

      To activate the motors, open the terminal; we'll write the Python code here using the command-line text editor Nano. For those of you who aren't familiar with Nano, I'll show you how to use some of its commands as we go.
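
      To create the script, something like this should work (motor.py matches the file name used in the run command later):

      nano motor.py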

      This code will activate the motor for two seconds, so try it out.

      import RPi.GPIO as GPIO

      from time import sleep

      GPIO.setmode(GPIO.BOARD)

      Motor1A = 16

      Motor1B = 18

      Motor1E = 22

      GPIO.setup(Motor1A,GPIO.OUT)

      GPIO.setup(Motor1B,GPIO.OUT)

      GPIO.setup(Motor1E,GPIO.OUT)

      print "Turning motor on"

      GPIO.output(Motor1A,GPIO.HIGH)

      GPIO.output(Motor1B,GPIO.LOW)

      GPIO.output(Motor1E,GPIO.HIGH)

      sleep(2)

      print "Stopping motor"

      GPIO.output(Motor1E,GPIO.LOW)

      GPIO.cleanup()

      The very first two lines of code tell Python what the program needs.

      The first line imports the RPi.GPIO package; this module controls the RPi's GPIO pins and takes care of all the grunt work.

      The second line imports the sleep function, which is needed to pause the script for a few seconds so that the motor is left running for a while.

      The setmode method tells the library to use the RPi's physical board pin numbering, and we then tell Python that pins 16, 18 and 22 are connected to the motor driver.

      Pin A is used to steer the L293D in one way, and pin B is used to direct it in the opposite direction. You can turn on the motor using an Enable pin, referred to as E, inside the test file.

      Finally, use GPIO.OUT to inform the RPi that all these are outputs.

      The RPi is ready to turn the motor after the software is set up. After a 2-second pause, some pins will be turned on and subsequently turned off, as seen in the code.

      Save and quit by hitting CTRL-X, and a confirmation notice appears at the bottom. To acknowledge, tap Y and Return. You can now run the program in the terminal and watch as the motor begins to spin up.

      sudo python motor.py

      If the motor doesn't move, check the cabling or power supply. The debug process might be a pain, but it's an important phase in learning new things!

      Now turn in the other direction.

      I'll teach you how to reverse a motor's rotation to spin in the opposite direction.

      There's no need to touch the wiring at this point; it's all Python. Create a new script called motorback.py to accomplish this. Using Nano, type the command:

      nano motorback.py

      Please type in the given program:

      import RPi.GPIO as GPIO

      from time import sleep

      GPIO.setmode(GPIO.BOARD)

      Motor1A = 16

      Motor1B = 18

      Motor1E = 22

      GPIO.setup(Motor1A,GPIO.OUT)

      GPIO.setup(Motor1B,GPIO.OUT)

      GPIO.setup(Motor1E,GPIO.OUT)

      print "Going forwards"

      GPIO.output(Motor1A,GPIO.HIGH)

      GPIO.output(Motor1B,GPIO.LOW)

      GPIO.output(Motor1E,GPIO.HIGH)

      sleep(2)

      print "Going backwards"

      GPIO.output(Motor1A,GPIO.LOW)

      GPIO.output(Motor1B,GPIO.HIGH)

      GPIO.output(Motor1E,GPIO.HIGH)

      sleep(2)

      print "Now stop"

      GPIO.output(Motor1E,GPIO.LOW)

      GPIO.cleanup()

      Save by pressing CTRL+X, then Y, and finally the Enter key.

      To make the motor run in reverse, we've set Motor1A low in the second half of the script.

      Programmers use the terms "high" and "low" to denote the state of being on or off, respectively.

      Motor1E will be turned off to halt the motor.

      Irrespective of what A and B are doing, the motor can be turned on or off using the Enable pin.

      Take a peek at the Truth Table to understand better what's going on.

      When the driver is enabled, there are only two input states that make the motor move: either A or B is high, but not both at the same time.
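
      Summarised as a truth table (assuming a standard L293D channel, wired as above):

      Enable (E) | Input A | Input B | Motor
      HIGH       | HIGH    | LOW     | Spins one way (forwards)
      HIGH       | LOW     | HIGH    | Spins the other way (backwards)
      HIGH       | LOW     | LOW     | Stopped
      HIGH       | HIGH    | HIGH    | Stopped
      LOW        | any     | any     | Off (inputs ignored)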

      Putting it all together

      At this point, we have designed our face detection system and the dc motor control circuit; now, we will put the two systems to work together. When the user is verified, the dc motor should run to open the cd rom drive and close after a few seconds.

      In our recognition code, we will add the code below to spin the motor in one direction ("open the door") when the user is verified. We will also increase the delay to 5 seconds to simulate the time the door stays open for the user to get through; this also allows the motor to spin long enough to open and close the CD-ROM tray completely. I would also recommend putting a stopper on the CD-ROM tray so that it doesn't close all the way and get stuck.

      if conf>=80:

          font = cv2.FONT_HERSHEY_SIMPLEX

          name = labels[id_] #Get the name from the List using ID number

          cv2.putText(img, name, (x,y), font, 1, (0,0,255), 2)

          #place our motor code here (requires "import RPi.GPIO as GPIO" and "from time import sleep" at the top of the program)

          GPIO.setmode(GPIO.BOARD)

          Motor1A = 16

          Motor1B = 18

          Motor1E = 22

          GPIO.setup(Motor1A,GPIO.OUT)

          GPIO.setup(Motor1B,GPIO.OUT)

          GPIO.setup(Motor1E,GPIO.OUT)

          print("Opening")

          GPIO.output(Motor1A,GPIO.HIGH)

          GPIO.output(Motor1B,GPIO.LOW)

          GPIO.output(Motor1E,GPIO.HIGH)

          sleep(5)

          print("Closing")

          GPIO.output(Motor1A,GPIO.LOW)

          GPIO.output(Motor1B,GPIO.HIGH)

          GPIO.output(Motor1E,GPIO.HIGH)

          sleep(5)

          print("stop")

          GPIO.output(Motor1E,GPIO.LOW)

          GPIO.cleanup()

          cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)

      Output

      The advantages of face recognition over alternative biometric verification methods for home security

      An individual's biometric identity can be verified by looking at various physical and behavioural characteristics, such as fingerprints, keystrokes, facial features, and voice. Face recognition comes out the winner because of its precision, simplicity, and contactless operation.

      Face-recognition technology is here to stay and will only get better over time. The story has changed, and your options have grown thanks to smart tech.

      What are the advantages of employing Facial Recognition when it comes to smart home security?

      Using an RPi as a surveillance system means you can take it with you and use it wherever you need it.

      1. High accuracy rate

      For the most part, the face-recognition software employed in security systems can reliably assess whether or not the individual attempting entry matches your record of those authorized to enter. On the other hand, certain computer programs are more precise when it comes to identifying faces from diverse angles or different countries.

      Concerned users may be relieved to learn that some programs have the option of setting custom confidence criteria, which can significantly minimize the likelihood of the system giving false positives. Alternatively, 2-factor authentication can be used to secure your account.

      2. Automation

      When your smart security system discovers a match between a user and the list of persons you've given access to, it will instantly let them in. Answering the doorbell or allowing entry isn't necessary.

      3. Smart integration

      Face recognition solutions can be readily integrated into existing systems using an API.

      Cons of Facial Recognition

      1. Privacy of individuals and society as a whole is more at risk

      A major drawback of face recognition technology is that it puts people's privacy at risk. Having one's face collected and stored in an unidentified database does not sit well with the average person.

      Confidentiality is so important that several towns have prohibited law enforcement from using real-time face recognition monitoring. Rather than using live face recognition software, authorities can use records from privately-held security cameras in certain situations.

      2. It can infringe on one's liberties

      Having your face captured and stored by face recognition software might make you feel monitored and assessed for your actions. It is a form of criminal profiling since the police can use face recognition to put everybody in their databases via a virtual crime lineup.

      3. It's possible to deceive modern technology.

      Face recognition technology can be affected by various other elements, including camera angle, illumination, and other aspects of a picture or video. Facial recognition software can be fooled by those who do disguises or alter their appearance.

      Conclusion

      This article walked us through creating a complete Smart Security System using a facial recognition program from the ground up. Our model can now recognize faces with the help of OpenCV image manipulation techniques. There are several ways to further your knowledge of supervised machine learning programming with raspberry pi 4, including adding an alarm to ring whenever an individual's face is not recognized or creating a database of known faces to act like a CCTV surveillance system. We'll design a security system with a motion detector and an alarm in the next session.

      Taking a screenshot in Raspberry pi 4

      Welcome to the next tutorial of our Raspberry Pi programming course. Our previous tutorial taught us how to print from a Raspberry Pi, and we also discussed some libraries to create a print server on it. In this lesson, we will learn how to take screenshots on the Raspberry Pi using a few different methods. We will also look at how to take snapshots on our Raspberry Pi remotely using SSH.

      Where To Buy?
      No. | Components     | Distributor | Link To Buy
      1   | Breadboard     | Amazon      | Buy Now
      2   | Jumper Wires   | Amazon      | Buy Now
      3   | PIR Sensor     | Amazon      | Buy Now
      4   | Raspberry Pi 4 | Amazon      | Buy Now

      Why should you read this article?

      This article will assist you when working with projects that require snapshots for documenting your work, sharing, or generating tutorials.

      So, let us begin.

      Screenshots are among the most essential items on the internet today. If you have seen them in tutorial videos or used them in everyday communication, you're already aware of how effective screenshots can be. They are quickly becoming a key internet currency for more efficient communication, and knowing how and when to utilize the correct ones will help you stand out from the crowd.

      Requirements

      • Raspberry Pi
      • MicroSD Card
      • Power Supply
      • Ethernet Cable

      Taking Screenshots Using Scrot

      In this case, we'll employ Scrot, a software program, to help with the PrintScreen procedure. This fantastic software program allows you to take screenshots using commands, shortcut keys, and enabled shortcuts.

      Features of Scrot

      • We could easily snap screen captures using scrot with no other tasks.
      • We could also improve the image quality of screen photos by using the -q option and a level of quality from 1 to 100. The quality level is currently at 75 by default.
      • It is straightforward to set up and use.
      • We may capture a particular window or even a rectangular portion of the display using the -s option.
      • Capable of retrieving all screen captures in a specific directory and storing all screen captures on a distant Computer or networked server.
      • Automatically monitor multiple desktop PCs while the administrator is away and prevent unauthorized behaviors.

      Scrot is already installed by default in the latest release of the Raspbian Operating system. In case you already have Scrot, you may skip this installation process. If you're not sure whether it's already installed, use the command below inside a pi Terminal window.
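
      For example, asking Scrot for its version number will tell you whether the command exists:

      scrot -v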

      If your Pi returns a "command not found" error, you must install it. Use the following command line to accomplish this:
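
      On Raspberry Pi OS the package is available through apt, so something like this should do it:

      sudo apt-get install scrot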

      After installing it, you may test its functionality by using the scrot instruction again. If no errors occur, you are set to go on.

      Capturing a snapshot on a Raspberry Pi isn't difficult, especially if Scrot is installed. Here are a handful of options for completing the work.

      1. Using a Keyboard Hotkey

      If you have the Scrot installed on your Pi successfully, your default hotkey for taking screenshots will be the Print Screen key.

      You can try this quickly by pressing the Print Screen button and checking the /home/pi directory. If you find the screenshots taken, your keyboard hotkey (keyboard shortcut) is working correctly.

      In addition, screenshots and print screen pictures will be stored with the suffix _scrot attached to the end of their filename. For instance,

      2. Using Terminal Window

      This is easy as pie! Execute the following command on your Pi to snap a screenshot:
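
      In its simplest form, the command is just the program name:

      scrot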

      That is all. It is that easy.

      Taking a Delayed Screenshot

      The preceding approach will not work if you need a screenshot without the menu visible, because triggering the capture requires the menu to be open. To get a clean snapshot, you can delay the capture by a few seconds: issue the command, close the menu, and let Scrot take the picture afterwards.

      To capture in this manner, send the following command to postpone the operation for five seconds.
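
      Using the delay option, for example:

      scrot -d 5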

      Other Scrot settings are as follows:

      • scrot -b : for taking a window's border.
      • scrot -e : To issue a command after taking a snapshot.
      • scrot -h : To bring up an additional assistance panel.
      • scrot -t : To generate a snapshot thumbnail.
      • scrot -u : To take a screenshot of the currently active tab.
      • scrot -v : Scrot's most recent version will be displayed.

      Changing Screenshot Saving Location

      You might occasionally need to give the images a specific name and directory. Add the target directory, followed by the desired filename and extension, immediately after the scrot command.

      For instance, if you prefer to assign the title raspberryexpert to it and store it in the downloads directory, do the following command:
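
      For example (adjust the path and name to suit):

      scrot /home/pi/Downloads/raspberryexpert.png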

      Remember that the filename should always end with the .png extension.

      Mapping the Screenshot Command to a Hotkey

      If the capture command isn't already mapped as a hotkey, you'll have to map it by altering your Pi's config file, and it'll come in handy.

      It would be best if you defined a hotkey inside the lxde-pi-rc.xml script to use it. To proceed, use this syntax to open the script.
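
      On Raspberry Pi OS this file is usually found under ~/.config/openbox/ (or /etc/xdg/openbox/ for the system-wide copy), so the command would look something like:

      nano ~/.config/openbox/lxde-pi-rc.xml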

      We'll briefly demonstrate how to add the snapshot hotkey to the XML script. It would be best to locate the <keyboard> section and put the following lines directly below it.

      We will map the scrot function to the snapshot hotkeys on the keyboard by typing the above lines.

      Once you've successfully added those lines, save the script by hitting CTRL+X, then Y, and afterwards the ENTER key.

      Enter the command below to identify the new changes made.
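
      Reloading the Openbox configuration should be enough, for example:

      openbox --reconfigure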

      How to Take a Screenshot Remotely over SSH

      You may discover that taking snapshots on the Raspberry Pi itself is impractical in some situations. In that case, you can take the image remotely over SSH.

      As is customary when dealing with SSH, you must first enable it. You may get more information about this in our previous tutorials.

      Log in with the command below after you have enabled SSH:
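
      For example, replacing the address with your Pi's actual IP:

      ssh pi@192.168.1.10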

      Now use the command below to snap an image.

      If you've previously installed the Scrot, skip line 2.

      Using the command below, you can snap as many snapshots as you like using varying names and afterward transferring them over to your desktop:
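
      A sketch of the idea, using scp and placeholder names (adjust the filename, username, and IP address):

      # on the Pi (over SSH): take a screenshot with a unique name
      scrot /home/pi/screenshot1.png
      # on your desktop: copy it over
      scp pi@192.168.1.10:/home/pi/screenshot1.png ~/Desktop/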

      Remember to change the syntax to reflect the correct username and IP address.

      Saving the Screenshot Directly on your Computer

      You can snap a screenshot and save it immediately to your Linux PC. However, if you regularly have to take snapshots, entering the passcode each time you access the RPi via SSH becomes a tedious chore, so you can configure passwordless SSH on the Raspberry Pi using public and private keys.

      To proceed, use the following command to install maim on raspberry pi.
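
      maim is available from the standard repositories, so the install command should be something like:

      sudo apt-get install maim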

      Return to your computer and use the command below to take a snapshot.
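
      A sketch of the idea (placeholder IP address; DISPLAY=:0 points maim at the Pi's desktop session):

      ssh pi@192.168.1.10 'DISPLAY=:0 maim' > pi_screenshot.png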

      We're utilizing maim instead of the other approaches since it's a more elegant method: it sends the image to stdout, allowing us to save it to our laptop via a simple shell redirect.

      Taking Screenshots Using Raspi2png

      Raspi2png is a screenshot software that you may use to take a screenshot. Use the code below for downloading and installing the scripts.

      After that, please place it in the /usr/local/bin directory.

      Enter the following command to capture a screenshot.

      Make sure to use your actual folder name in place of <directory_name>.

      Taking Screenshots Using GNOME Tool

      Because we are using a GUI, this solution is relatively simple and easy to implement.

      First, use the following command to download the GNOME Snapshot tool.
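
      The tool is packaged as gnome-screenshot, so the command should look something like:

      sudo apt-get install gnome-screenshot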

      After it has been installed, go to the Raspberry navbar, select the menu, select Accessories, and finally, Screenshot.

      This opens the GNOME Screenshot window, where you can find three different capture options, as seen below.

      Choose the appropriate capture method and select Capture Image. If you pick the third choice, you will have to use a mouse to choose the location you wish to snip. If you use this option, you will not need a picture editor to resize the snapshot image. The first choice will record the entire screen, while the second will snip the active window.

      GNOME gives you two alternatives once you capture a screen. The first is to save the snapshot, and the other is to copy it to the clipboard. So select it based on your requirements.

      What are the Different Types of Screenshots to know?

      1. Screenshot

      It all begins with a simple screenshot. You don't need any additional programs or software to capture a basic screenshot. At this moment, this feature is built into almost all Raspberry Pi versions and Windows, Mac PCs, and cellphones.

      2. Screen capture

      It is the process of capturing all or a part of the active screen and converting it to a picture or video.

      While a screenshot and a screen capture may appear to be the same thing, they are not. A screenshot is simply a static picture, whereas a screen capture is the process of collecting anything on the screen, such as images or video.

      Assume you wish to save a whole spreadsheet. It's becoming a little more complicated now.

      Generally, you would be able to record whatever is on your window. Still, in case you need to snip anything beyond that, such as broad, horizontal spreadsheets or indefinitely lengthy website pages, you'll need to get a screen capture application designed for that purpose. Snagit includes Scrolling snapshot and Panorama Capture capabilities to snap all of the material you need in a single picture rather than stitching together many images.

      3. Animated GIF

      This is a GIF file containing a moving image. An animated succession of picture frames is exhibited.

      While gif Images aren't limited to screen material, they may be a proper (and underappreciated) method to express what's on your display.

      Instead of capturing multiple pictures to demonstrate a process to a person, you may create a single animated Version of what is going on on your computer. These animations have small file sizes, and they play automatically, making them quick and simple to publish on websites and in documents.

      4. Screencast

      This means making a video out of screen material to demonstrate how a program works or to sell a product by displaying its functionality.

      If you want to go further than a simple snapshot or even gif Animation, they are a good option. If you have ever looked for help with a software program, you have come across a screencast.

      They display your screen and generally contain some commentary to make you understand what you are viewing.

      Screencasts can range from polished movies used among professional educators to fast recordings showing a coworker how to file a ticket to Information technology. The concept is all the same.

      Three reasons Why Screenshot tool is vital at work

      1. Communicate Effectively

      Using screenshots to communicate removes the guesswork from graphical presentations and saves time.

      The snapshot tool is ideal for capturing screenshots of graphical designs, new websites, or social media posts pending approval.

      2. Demonstrate to Save Time

      This is a must-have tool for anybody working in Information Technology, human resource, or supervisors training new workers. Choose screenshots over lengthy emails, or print screen pictures with instructions. A snapshot may save you a lot of time and improve team communication.

      Furthermore, by preserving the snapshot in Screencast-O-Matic, your staff will be able to retrieve your directions at any time via a shareable link.

      To avoid confusion, utilize screen captures to demonstrate rather than describe. IT supervisors, for instance, can utilize images to teach their colleagues where to obtain computer upgrades. Take a snapshot of the system icon on your desktop, then use the Screen capture Editor to convert your screen capture into a graphical how-to instruction.

      Any image editing tool may be used to improve pictures. You may use the highlighting tool to draw attention to the location of the icons.

      3. Problem Solve and Share

      Everybody has encountered computer difficulties. However, if you can't articulate exactly what has happened, diagnosing the problem afterward will be challenging. It's simple to capture a snapshot of the issue.

      This is useful when talking with customer service representatives. Rather than discussing the issue, email them an image to help them see it. Publish your image immediately to Screencast and obtain a URL to share it. Sharing photos might help you get help quickly.

      It can also help customer support personnel and their interactions with users. They may assist consumers more quickly by sending screenshots or photographs to assist them in resolving difficulties.

      Snapshots are a simple method for social media administrators to categorize, emphasize, or record a specific moment. Pictures are an easy method to keep track of shifting stats or troublesome followers. It might be challenging to track down subscribers who breach social network regulations. Comments and users are frequently lost in ever-expanding discussions.

      Take a snapshot of the problem to document it. Save this image as a file or store it in the screenshots folder of Screencast. Even if people remove their remarks, you will have proof of inappropriate activity.

      Conclusion

      This tutorial taught us how to take screenshots from a Raspberry Pi using different methods. We also went through how to remotely take snapshots on our Pi using SSH and discussed some of the benefits of using the screenshot tool. In the following tutorial, we will learn how to use a Raspberry Pi as a web server.

      Voice Control Project using Raspberry Pi 4

      Welcome to the next tutorial of our Raspberry Pi programming course. Our previous tutorial taught us to make a button-controlled "music box" that plays different sounds depending on which buttons are pressed. In this lesson, we will configure our raspberry pi for voice control.

      Where To Buy?
      No. | Components     | Distributor | Link To Buy
      1   | Breadboard     | Amazon      | Buy Now
      2   | DC Motor       | Amazon      | Buy Now
      3   | Jumper Wires   | Amazon      | Buy Now
      4   | Raspberry Pi 4 | Amazon      | Buy Now

      What will you learn?

      Like the Amazon Echo, voice-activated gadgets are becoming increasingly popular, but you can also construct your own with a Raspberry, a cheap USB mic, and some appropriate software. Simply speaking to your Raspberry Pi will allow you to search YouTube, view websites, activate applications, and even answer inquiries.

      What will you need?

      Because the Raspberry Pi has no built-in microphone input, this project requires a USB microphone or a camera with a built-in microphone. If your mic only has an audio jack, look for an affordable USB sound card that connects to a USB port on one side and has a headphone and mic socket on the other.

      Getting started

      For the Raspberry Pi, there are various speech recognition programs. We're utilizing Steve Hickson's Pi AUI Suite for this project since it is powerful and straightforward to set up and operate. You may install a variety of programs using the Pi AUI Suite. The first question is whether or not the dependencies should be installed. These are the files that the Raspberry Pi requires to work with voice commands, so pick Yes, then press Enter to agree.

      Following that, you'll be asked if you wish to download the PlayVideo software, which allows you to open and play video content using voice commands. If you select Y, you'll be prompted to enter the location of your media files, such as /home/pi/Videos. It's worth noting the upper-case letters are crucial in this scenario. The application will tell you if the route is incorrect.

      Next, you'll be asked if you wish to download the Downloader application, which explores the internet for files and downloads them for you automatically. If you select Yes, you will be prompted to enter an address, port, password, and username. If you're not sure, press Return to choose the default settings in each scenario for now.

      Install the Google Texts to Speech Service if you require your raspberry pi to read the contents of the text files. Since it communicates to Google servers to translate text into speech, the Raspberry Pi must be hooked up to the internet to utilize this service.

      You'll require a Google user account to install this. The installation requests your username; type it and press Return. The Google password is then requested; type this and press Return as well.

      You may also use the installer to download Google Voice Commands. This makes use of Google's speech-to-text technology. To proceed, you must enter your Google login and password once again.

      Regardless of whether you choose the Google-specific software or not, the installer will ask if you wish to download the YouTube scripts. These scripts allow you to say something like "play pi tutorial" and have an appropriate video clip played. You will be asked to type a new greeting; press Return after typing it. You can also enable the silent flag to prevent the Raspberry Pi from responding verbally.

      Lastly, the software installs the Voice command, which includes some of the more helpful scripts, such as the ability to deploy your internet browser by simply saying "internet."

      Basic voice commands used

      YouTube: When you say "YouTube" followed by a search term, the first related YouTube clip is played, much like Google's "I'm feeling lucky" feature. Say "YouTube" followed by the title of the video you want to watch, for example, "YouTube fluffy kittens."

      Internet: Your internet browser is launched when you use the word "internet." Midori is the internet browser for Rpi by default, but you may alter that.

      Download: When you say "download," followed by a search query, the Pirate Bay webpage searches for the files in demand. For instance, you can say "Download Into the badlands" to get the most current edition of the movie.

      Play: This phrase utilizes the in-built media player to open an audio or video file. For instance, "Play football.mp4" would play the file "football.mp4" from the media directory you chose during setup, like /home/pi/movies.

      Show me: When you say "show me," a directory of your choice appears. The command defaults to not going to a valid root directory, so you'll need to modify your configuration so that it points to one. For instance, show me==/Documents.

      You'll be asked if you want the Voice command to set things up automatically. If an issue occurs at this point, run the following command to download and install the required software.

      Configuring the Raspberry Pi master voice

      After installing the Voice command application, you may want to perform a few basic adjustments to the setup to fine-tune it. Execute the following command from your Raspberry Pi's Terminal or via SSH.

      Following that, you'll be asked several yes/no questions. The first question is whether you wish to enable the continuous flag permanently. The Voice command application, in clear English, asks if you would want to listen to your voice commands constantly every time you launch it.

      For the time being, choose Yes. After that, you'll be asked if you wish the Voice command application to set the verify flag permanently. If you select Y, the application will expect you to pronounce your keyword (the default setting is "Pi") before responding to requests.

      If you like the RPi to monitor continually but not act on all words you say, this is a good option.

      The next step asks if you wish to enable the ignore flag permanently. If Voice command receives a command that isn't expressly listed in the config file, it will try to find and launch a program from your installed apps. For example, if you say "leafpad," a notepad tool, the Voice command looks for it and starts it even if you don't tell it to.

      This is a functionality that we would not recommend anyone to enable. Because you're using Voice command at the SuperUser level, there's a significant danger that you'll accidentally issue a command to the Raspberry Pi that will harm your files. If you wish to set up other programs to function with the Voice command, you can update the configuration file for each scenario.

      The voice command afterward asks if you want to permanently enable the silence flag so that it doesn't respond verbally whenever you talk. As you see fit, select yes or no. After that, you'll be prompted to adjust the default voice recognition timeframe. If Pi is having problems hearing your commands, you should modify this.

      If you select Yes, you'll be prompted to enter a number; this is the number of seconds that the Raspberry Pi will listen for a voice command; the default for RPI is 3. The application then allows you to customize your text-to-speech choices. Before you do this, make sure your volume is turned up. The application attempts to speak something and then asks if you heard it.

      When the system receives your keyword, it responds with "Yes, sir?" as the default response. To modify this, select Yes in the following prompt, then enter your chosen response, for example, "Yes, ma'am?" Once you're finished, hit the enter key. The program will replay the assigned response to check that you are satisfied with the outcome.

      Whenever the program receives an unidentified command, the method is the same as the default response. "Received the wrong command" is set as the default response, but you could still alter it to something more friendly by typing yes, then your desired response, like, "The command is not known."

      You now have the option of configuring the voice recognition parameters. This will check to see if you have a suitable mic. The Pi will then ask you if you want it to test your sound threshold using the Voice command.

      Make sure there is no background noise, then select Yes and press the Enter key. It then requests you to say a command to ensure that it is using the correct audio device. Type Yes to have the application automatically select the appropriate audio threshold for your RPi.

      Lastly, the Raspberry Pi will ask if you wish to modify the default voice command term ("Pi"). After that, type Y and your new keyword, then press enter when you're finished.

      After that, you'll be requested to say your keyword to acclimate the RPi to your voice. If everything looks good, press Y to finish the setup. Start with the fundamental commands outlined above when using the Voice command software.

      Once you've mastered these commands, use the following line of code to exit the application and, if desired, change your config file.

      Vexing sounds and how to get rid of them

      The Raspberry Pi's technology is still a work-in-progress, so not everything you speak may be recognized by the program.

      If the program still has difficulty understanding you, stay near the mic and talk slowly and clearly to maximize your chances of being heard. To change your audio preferences, launch the terminal or log in to your Raspberry Pi through SSH and type the following command to access your audio settings.
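
      The mixer described below is alsamixer, so the command is presumably:

      alsamixer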

      Hit the F4 button on the keyboard to select audio input, then the F6 key to exit. Select your input device, the mic with the arrow up or down keys, then press Enter key. To change the mic's volume, push it up using the up-arrow key to maximum (100).

      If your device isn't identified at all, it may require more current than a universal serial bus port on a Raspberry Pi can supply. A powered universal serial bus hub is the ideal solution for this.

       

      If you have difficulty connecting after installing the Download application, please ensure that connection to The Pirate Bay site is not limited.

      To download the files, you'll also require a BitTorrent application for your RPi, such as transmission. To install this, launch your terminal or access your RPi through SSH and type the following command:
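
      For example, installing the desktop Transmission client from the repositories:

      sudo apt-get install transmission-gtk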

      The Transmission homepage has instructions about getting started and utilizing the application. You should always download files that have permission from the copyright holder.

      Please remember that whatever you speak and any text documents you provide are transferred to Google servers for translation if you use Google text or speech Commands.

      Google maintains that none of this information is kept on its servers. Even if this is the case, any data communicated through the worldwide web can be decrypted by any skilled third party or a hacker. Google, however, encrypts your connection to minimize the risk of this happening.

      If you like this voice command tool, you might want the program to launch automatically every time the Rpi is powered on. If this is the case, launch the terminal from your RPi or access it via SSH and execute the command below:
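
      The boot script in question is normally /etc/rc.local, so the command would be something like:

      sudo nano /etc/rc.local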

      The above command opens the script that controls which programs run whenever your Raspberry Pi is booted. By default, this script is doing nothing. Type the following line of code directly above the one reading exit 0:
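
      Assuming the Voice command tool's continuous-mode flag, the added line might look like this (check the voicecommand documentation for the exact options on your install):

      voicecommand -c &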

      To save any changes, use Ctrl+X, enter yes, and press enter key. At this point, you can restart the Raspberry Pi to ensure that it is working.

      Launch your Rpi terminal, type the command below, and press enter to see a list of active processes.
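
      For example:

      ps aux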

      How can we reduce vexing noise?

      Noise from air conditioners and heaters can degrade your audio and make it impossible for the program to understand what you're saying. Unless the entire room is redesigned to be more acoustically friendly, the only real alternative is to turn these systems off while recording; otherwise, the noise is clearly audible and annoying in the captured audio.

      Computer hardware cooling fans are also sources of mechanical noise. These can be disabled manually and for a limited period. Besides that, try isolating the disturbance in another space or utilizing an isolation box as an alternative.

      Conclusion

      We learned how to configure our Raspberry Pi for voice control, and we looked at a few basic commands and the software used to control it. In the following tutorial, we will learn how to tweet from a Raspberry Pi.
