
Octopod: A Smart IoT Home/Industrial Automation Project

Components and Supplies

Arduino UNO
× 1
Arduino MKR WiFi 1010
This or any other WiFi ESP8266/ESP32 board. This one was not available in my country, so I chose the NodeMCU ESP8266
× 1
Maxim Integrated MAX32630FTHR
You can choose the MAX32620FTHR, Arduino MKR 1010, or any ESP8266 board. With this board you need an external WiFi module or an ESP8266 chip for Internet access
× 1
Raspberry Pi Zero Wireless
You can also use a normal Raspberry Pi 2/3!
× 1
DHT11 Temperature & Humidity Sensor (4 pins)
× 1
Seeed Grove - Gas Sensor (MQ2)
× 1
SparkFun Soil Moisture Sensor (with Screw Terminals)
× 1
PIR Motion Sensor (generic)
Optional
× 1
RFID Reader (generic)
× 1
Relay (generic)
Preferably 2 channels
× 1
RGB Diffused Common Cathode LED
× 1
Raspberry Pi Camera Module
× 1
Servo (Tower Pro MG996R)
× 1
Buzzer
× 2
HC-05 Bluetooth Module
Optional
× 1
LED (generic)
× 4
Wall Adapter / Power Bank
× 2
Memory Card
More than 4 GB, preferably Class 10 (needed for the Raspberry Pi OS)
× 1

Necessary Tools and Machines

Hot Glue Gun (generic)
3D Printer (generic)
Optional
Hand Tools
Needle-nose pliers, scissors, cutter, etc.

Apps and Online Services

Blynk
OpenCV

About This Project

There are plenty of IoT automation projects out there, but trust me, there is nothing like this! Octopod is made using a NodeMCU (or a MAX32620FTHR or Arduino MKR 1010), an Arduino Uno, and a Raspberry Pi 3. Octopod makes your home smarter. Octopod sends you all kinds of data, such as the temperature, humidity, and gas quality inside your home/office/industry. Octopod sends you a notification whenever it detects any kind of motion and tells you when you need to water your plants. You can also control your appliances through the Blynk app on your smartphone. Octopod even enables true mood lighting!

Octopod comes with a tiny camera that sends you a live feed. This camera also uses artificial intelligence to detect humans in its line of sight and sends you their pictures. In addition, it features an RFID door lock system! Awesome, right?

How Does It All Work?

The NodeMCU is connected to a bunch of sensors, a relay module, and an RGB LED. It connects over WiFi to the Blynk app on your smartphone, sends it all the data, and lets you control your home.

The Raspberry Pi is also connected to WiFi and lets you see a live feed through the Pi Camera. We have also installed the OpenCV libraries on the Pi and configured it to detect any human beings in its line of sight and email you their image.

The smart door unit uses an RFID module. When an allowed RFID tag comes within its range, it automatically opens the door.

Step 1: Coding the Main Octopod

I have added comments on almost every line, so that you don't just copy the code but understand it. Here, I will briefly tell you what actually happens when the code runs!

  • Including Libraries:

This code uses 2 main libraries: the Blynk library, which makes the code compatible with the Blynk app, and the DHT11 library, which converts the raw data from the sensor into temperature and humidity. To download these libraries, just go to the links given in the code and download them. Then head to the Arduino IDE → Sketch → Include Library → Add .ZIP Library, and select the libraries you downloaded.

#include <ESP8266WiFi.h>        // Include ESP8266 WiFi Library
#include <BlynkSimpleEsp8266.h> // Include Blynk Library
#include <DHT.h>                // Include DHT sensor library
#define BLYNK_PRINT Serial

This is some Blynk code that helps you connect the NodeMCU to the internet and then authenticate it with your app.

// You should get the Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "Your Auth Key";

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "Your WiFi SSID";
char pass[] = "Your WiFi Pass";
  • Defining Pins and Integers:

In this part we define the pins of the various sensors. You can change them as you wish. We also define some integers that we use over the course of the code.

#define DHTPIN 2   // What digital pin the temperature and humidity sensor is connected to
#define soilPin 4  // What digital pin the soil moisture sensor is connected to
#define gasPin A0  // What analog pin the gas sensor is connected to
#define pirPin 12  // What digital pin the PIR motion sensor is connected to

int pirValue;      // Place to store read PIR value
int soilValue;     // Place to store read soil moisture value
int PIRpinValue;   // Place to store the value sent by Blynk App Pin V0
int SOILpinValue;  // Place to store the value sent by Blynk App Pin V1
  • BLYNK_WRITE() :

With this code, we tell the Blynk app that it can use Pin V0 and Pin V1 to tell the code whether Motion Detection and the Soil Moisture test are turned on.

BLYNK_WRITE(V0) // V0 pin from Blynk app tells if Motion Detection is ON
{
  PIRpinValue = param.asInt();
}

BLYNK_WRITE(V1) // V1 pin from Blynk app tells if Soil Moisture is ON
{
  SOILpinValue = param.asInt();
}
  • void sendSensor() :

This code takes the data from the DHT11, makes it usable, and then sends it to Pin V5 and Pin V6 respectively.

void sendSensor()
{
  int h = dht.readHumidity();
  int t = dht.readTemperature(); // or dht.readTemperature(true) for Fahrenheit

  if (isnan(h) || isnan(t)) {
    Serial.println("Failed to read from DHT sensor!"); // to check if sensor is not sending any false values
    return;
  }
  // You can send any value at any time.
  // Please don't send more than 10 values per second.
  Blynk.virtualWrite(V5, h); // send humidity to pin V5
  Blynk.virtualWrite(V6, t); // send temperature to pin V6
}
  • void getPirValue() & void getSoilValue():

These read the digital value from the sensors, then run an if-else condition to check the state of the sensor. If the sensor is in the required state, the code pushes a notification through the Blynk app.

void getPirValue(void)
{
  pirValue = digitalRead(pirPin);
  if (pirValue) // digital pin of PIR gives a high value on human detection
  {
    Serial.println("Motion detected");
    Blynk.notify("Motion detected");
  }
}

void getSoilValue(void)
{
  soilValue = digitalRead(soilPin);
  if (soilValue == HIGH) // digital pin of soil sensor gives a low value when moisture is less
  {
    Serial.println("Water Plants");
    Blynk.notify("Water Plants");
  }
}
  • void setup():

In the setup we do things that only need to be done once, such as: starting serial communication at a fixed baud rate, authorizing this code with the Blynk app, beginning the DHT sensor readings, tweeting that your smart home project is online, and telling the node that the PIR pin and the soil sensor pin are meant for input only.

void setup()
{
  // Debug console
  Serial.begin(9600);
  Blynk.begin(auth, ssid, pass);
  // You can also specify the server:
  //Blynk.begin(auth, ssid, pass, "blynk-cloud.com", 8442);
  //Blynk.begin(auth, ssid, pass, IPAddress(192,168,1,100), 8442);
  dht.begin();                        // Begin DHT readings
  Blynk.tweet("OCTOPOD IS ONLINE!");  // Tweet on your Twitter handle that your project is online
  pinMode(pirPin, INPUT);             // The PIR pin is meant to take input only
  pinMode(soilPin, INPUT);            // The soil sensor pin is meant to take input only
  // Setup a function to be called every second
  timer.setInterval(1000L, sendSensor);
}
  • void loop():

In the loop we write things that are done over and over again. Here, we first make sure that the code we set up earlier keeps running (Blynk.run() and timer.run()). Then, we write 2 if statements that check the states of Pin V0 and Pin V1 and then take values from the sensors accordingly.

void loop()
{
  Blynk.run();
  timer.run();
  if (PIRpinValue == HIGH)  // V0 pin from Blynk app tells if Motion Detection is ON
  {
    getPirValue();
  }
  if (SOILpinValue == HIGH) // V1 pin from Blynk app tells if Soil Moisture is ON
  {
    getSoilValue();
  }
}

Step 2: Coding the RFID Smart Lock

Honestly, this is a simple and easy code that doesn't need much explanation, but I will still briefly tell you what it does. There are two versions of the code: one is for connecting the door unit box to Bluetooth so that it tells you over a serial terminal when the door opens; the other sends its output to serial, so you can view it if you connect the Arduino to a computer. I prefer the simpler version without Bluetooth. So let's get started!

  • Go to Sketch → Include Library → Manage Libraries → type MFRC522 in the search bar and install the library. Then go to File → Examples → Custom Libraries → MFRC522 → DumpInfo sketch. At the beginning of the sketch you can read how to connect the pins (or refer to the picture). Then run the code, open the Serial Monitor, hold an RFID card in front of the MFRC522 module and wait 5 seconds. Note down the card's UID, and in a similar manner note down the UIDs of your other cards and key fobs.
  • Then download whichever version of the code you like. Open the code and go to this line. Here, in place of the X's, add the UID of the card that you want to use to open the door. Now you are all set; just upload the code.
if (content.substring(1) =="XX XX XX XX") {} 

In this code we mainly do two things, namely the if-else part of the code. In the if part, we tell the Arduino that if the UID of the card matches the mentioned UID, it should move the servo (so that the door opens), blink some LEDs, and make some sound with the buzzer. In the else part, if the UID does not match, it blinks some other LEDs and makes some sound with the buzzer.
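To make that if-else branch concrete, here is a minimal sketch of what the access check can look like. It is only an illustration: the pin names (LED_G, LED_R, BUZZER, myServo) follow the definitions in the full door-lock listing at the end of this article, the servo angles and delays are assumptions you can tune, and "XX XX XX XX" is a placeholder UID that you must replace with your own card's UID.

// Hypothetical helper illustrating the access check; LED_G, LED_R, BUZZER and
// myServo are assumed to be defined as in the full door-lock listing below.
void handleCard(String content) {
  if (content.substring(1) == "XX XX XX XX") {  // UID matches an authorised card
    Serial.println("Authorized access");
    tone(BUZZER, 500);            // confirmation beep
    digitalWrite(LED_G, HIGH);    // green LED on
    myServo.write(180);           // rotate the servo so the door opens
    delay(5000);                  // keep the door open for 5 seconds
    myServo.write(0);             // close it again
    digitalWrite(LED_G, LOW);
    noTone(BUZZER);
  } else {                        // unknown card
    Serial.println("Access denied");
    tone(BUZZER, 300);            // error beep
    digitalWrite(LED_R, HIGH);    // red LED on
    delay(1000);
    digitalWrite(LED_R, LOW);
    noTone(BUZZER);
  }
}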

Step 3: Raspberry Pi Human Detection AI Setup

In this step we will learn how to make a smart security camera. The camera will send you an email whenever it detects an object and, if you are on the same WiFi network, you can access the camera's live footage by typing the Raspberry Pi's IP address into a browser. I will show you how to create the smart camera from scratch. Let's go!

Requirements:

1. OpenCV (Open Source Computer Vision Library)

2. Raspberry Pi 3B

3. Raspberry Pi Camera V2

Assumptions:

1. Raspberry Pi 3 with Raspbian Stretch installed. If you do not already have the Raspbian Stretch OS, you will need to upgrade your OS to take advantage of Raspbian Stretch's new features.

To upgrade your Raspberry Pi 3 to Raspbian Stretch, you may download it here and follow these upgrade instructions (or the NOOBS route, which is suited to beginners).

Note: If you are upgrading your Raspberry Pi 3 from Raspbian Jessie to Raspbian Stretch, there is the potential for problems. Proceed at your own risk, and consult the Raspberry Pi forums for help. Important: I recommend a fresh install of Raspbian Stretch! Upgrading from Raspbian Jessie is not recommended.

2. Physical access to your Raspberry Pi 3 so that you can open up a terminal and execute commands, or remote access via SSH or VNC. I'll be doing the majority of this tutorial via SSH, but as long as you have access to a terminal, you can easily follow along.

  • STEP 1: Connecting the Camera to the RASPBERRY PI 3

1. Open up your Raspberry Pi Camera module. Be aware that the camera can be damaged by static electricity. Before removing the camera from its grey anti-static bag, make sure you have discharged yourself by touching an earthed object (e.g. a radiator or PC chassis).

2. Install the Raspberry Pi Camera module by inserting the cable into the Raspberry Pi. The cable slots into the connector situated between the Ethernet and HDMI ports, with the silver connectors facing the HDMI port.

3. Boot up your Raspberry Pi.

4. From the prompt, run "sudo raspi-config". If the "Camera" option is not listed, you will need to run a few commands to update your Raspberry Pi. Run "sudo apt-get update" and "sudo apt-get upgrade".

5. Run "sudo raspi-config" again; you should now see the "Camera" option.

COMMAND-

$ sudo raspi-config 

6. Navigate to the "Camera" option and enable it (look under Interfacing Options). Select "Finish" and reboot your Raspberry Pi, or type the following:

$ sudo reboot 
  • STEP 2: OpenCV Installation

If this is your first time installing OpenCV, or you are just getting started with Raspbian Stretch, this is the perfect tutorial for you.

Step #1: Expand Filesystem

Are you using a brand new install of Raspbian Stretch? If so, the first thing you should do is expand your filesystem to include all available space on your micro-SD card:

COMMAND-

$ sudo raspi-config 

Then select the "Advanced Options" menu item, followed by "Expand Filesystem". Once prompted, you should select the first option, "A1. Expand File System", hit Enter on your keyboard, arrow down to the "<Finish>" button, and then reboot your Pi. If you're using an 8GB card you may be using close to 50% of the available space, so one simple thing to do is to delete both LibreOffice and the Wolfram engine to free up some space on your Pi:

COMMAND-

$ sudo apt-get purge wolfram-engine
$ sudo apt-get purge libreoffice*
$ sudo apt-get clean
$ sudo apt-get autoremove

After removing the Wolfram engine and LibreOffice, you can reclaim almost 1GB!

Step #2: Install Dependencies

This isn't the first time I've discussed how to install OpenCV on the Raspberry Pi, so I'll keep these instructions brief so you can work through the installation process. I've also included the time it takes to execute each command (some depend on your Internet speed) so you can plan your OpenCV + Raspberry Pi 3 install accordingly (OpenCV itself takes approximately 4 hours to compile; more on this later). The first step is to update and upgrade any existing packages:

COMMAND-

$ sudo apt-get update && sudo apt-get upgrade

Then we need to install some developer tools, including CMake, which helps us configure the OpenCV build process:

COMMAND-

$ sudo apt-get install build-essential cmake pkg-config 

Next, we need to install some image I/O packages that allow us to load various image file formats from disk. Examples of such file formats include JPEG, PNG, TIFF, etc.:

COMMAND-

$ sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev 

Just as we need image I/O packages, we also need video I/O packages. These libraries allow us to read various video file formats from disk as well as work directly with video streams:

COMMAND-

$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libxvidcore-dev libx264-dev

The OpenCV library comes with a sub-module named highgui, which is used to display images on our screen and build basic GUIs. In order to compile the highgui module, we need to install the GTK development library:

COMMAND-

$ sudo apt-get install libgtk2.0-dev libgtk-3-dev 

Many operations inside of OpenCV (namely matrix operations) can be further optimized by installing a few extra dependencies:

COMMAND-

$ sudo apt-get install libatlas-base-dev gfortran 

These optimization libraries are especially important for resource-constrained devices such as the Raspberry Pi. Finally, let's install both the Python 2.7 and Python 3 header files so we can compile OpenCV with Python bindings:

COMMAND-

$ sudo apt-get install python2.7-dev python3-dev  

If you are working with a fresh install of the OS, these versions of Python may already be at the latest version (you'll see a terminal message stating this). If you skip this step, you may notice an error related to the Python.h header file not being found when running make to compile OpenCV.

Step #3: Download the OpenCV Source Code

Now that we have our dependencies installed, let's grab the 3.3.0 archive of OpenCV from the official OpenCV repository. This version includes the dnn module we discussed in a previous post on deep learning with OpenCV (Note: as future versions of OpenCV are released, you can replace 3.3.0 with the latest version number):

COMMAND-

$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
$ unzip opencv.zip

We want the full install of OpenCV 3 (so we have access to features such as SIFT and SURF, for example), so we also need to grab the opencv_contrib repository:

COMMAND-

$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
$ unzip opencv_contrib.zip

During copy and paste, you may need to expand the command above using the "<=>" button; the .zip in 3.3.0.zip may appear cut off in some browsers. The full URL of the OpenCV 3.3.0 archive is: https://github.com/Itseez/opencv_contrib/archive/... Note: Make sure your opencv and opencv_contrib versions are the same (in this case, 3.3.0). If the version numbers do not match, you will likely run into either compile-time or runtime errors.

Step #4: Python 2.7 or Python 3? Before we can start compiling OpenCV on our Raspberry Pi 3, we first need to install pip, a Python package manager:

COMMAND-

$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python get-pip.py
$ sudo python3 get-pip.py

发出这些命令时,您可能会收到 pip 已经是最新的消息,但最好不要跳过此步骤。如果您是 PyImageSearch 的长期读者,那么您就会知道我是 virtualenv 和 virtualenvwrapper 的忠实粉丝。

安装这些包不是必需的,你绝对可以在没有它们的情况下安装 OpenCV,但话虽如此,我强烈建议你安装它们,因为其他现有的 PyImageSearch 教程(以及未来的教程)也利用了 Python 虚拟环境。

我还将假设您在本指南的其余部分都安装了 virtualenv 和 virtualenvwrapper。那么,鉴于此,使用 virtualenv 和 virtualenvwrapper 的意义何在?首先,重要的是要了解虚拟环境是一种特殊工具,用于通过为每个项目创建隔离的、独立的 Python 环境,将不同项目所需的依赖项保存在不同的位置。简而言之,它解决了“Project X 依赖 1.x 版本,但 Project Y 需要 4.x”的困境。

它还可以使您的全球站点包整洁、整洁且没有杂乱。如果您想完整解释为什么 Python 虚拟环境是好的做法,请务必阅读这篇关于 RealPython 的优秀博客文章。 Python 社区的标准做法是使用某种虚拟环境,因此我强烈建议您也这样做:

COMMAND-

$ sudo pip install virtualenv virtualenvwrapper
$ sudo rm -rf ~/.cache/pip

Now that both virtualenv and virtualenvwrapper have been installed, we need to update our ~/.profile file. Include the following lines at the bottom of the file:

COMMAND-

$ nano ~/.profile 

Copy and paste the following lines to the bottom of the file:

COMMAND-

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh

Alternatively, you can simply use cat and output redirection to update ~/.profile:

COMMAND-

$ echo -e "\n# virtualenv and virtualenvwrapper" >> ~/.profile
$ echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.profile
$ echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.profile

Now that we have our ~/.profile updated, we need to reload it to make sure the changes take effect. You can force a reload of your ~/.profile file by: logging out and then logging back in.

Closing a terminal instance and opening up a new one

Or my personal favourite

COMMAND-

$ source ~/.profile  

Note: I recommend running source ~/.profile each time you open up a new terminal to ensure your system variables have been set up correctly.

Creating your Python virtual environment: Next, let's create the Python virtual environment that we'll use for computer vision development:

COMMAND-

$ mkvirtualenv cv -p python2  

This command will create a new Python virtual environment named cv using Python 2.7.

If you instead want to use Python 3, you’ll want to use this command instead:

COMMAND-

$ mkvirtualenv cv -p python3 

Again, I can't stress this point enough: the cv Python virtual environment is entirely independent and sequestered from the default Python version included in the download of Raspbian Stretch.

Any Python packages in the global site-packages directory will not be available to the cv virtual environment. Similarly, any Python packages installed in site-packages of cv will not be available to the global install of Python.

Keep this in mind when you're working in your Python virtual environment and it will help you avoid a lot of confusion and headaches.

How to check if you're in the "cv" virtual environment: If you ever reboot your Raspberry Pi, log out and log back in, or open up a new terminal, you'll need to use the workon command to re-access the cv virtual environment.

In previous blog posts, I've seen readers use the mkvirtualenv command repeatedly; this is entirely unneeded! The mkvirtualenv command is meant to be executed only once: to actually create the virtual environment. After that, you can use workon and you'll be dropped down into your virtual environment:

COMMAND-

$ source ~/.profile
$ workon cv

To validate and ensure you are in the cv virtual environment, examine your command line: if you see the text (cv) preceding your prompt, then you are in the cv virtual environment.

Otherwise, if you do not see the (cv) text, then you are not in the cv virtual environment:

If you do not see the "(cv)" text on your prompt, then you are not in the cv virtual environment. To fix this, simply execute the source and workon commands mentioned above. Assuming you've made it this far, you should now be in the cv virtual environment (which you should stay in for the rest of this tutorial).

Step #4: Installing NumPy on your Raspberry Pi

Our only Python dependency is NumPy, a Python package used for numerical processing:

COMMAND-

$ pip install numpy  

The NumPy installation can take a bit of time.

Step #5: Compile and Install OpenCV

COMMAND-

$ workon cv 

Once you have ensured you are in the cv virtual environment, we can setup our build using CMake:

COMMAND-

$ cd ~/opencv-3.3.0/
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
    -D BUILD_EXAMPLES=ON ..

Now, before we move on to the actual compilation step, make sure you examine the output of CMake! Start by scrolling down to the sections titled Python 2 and Python 3. If you are compiling OpenCV 3 for Python 2.7, then make sure your Python 2 section includes valid paths to the Interpreter, Libraries, numpy and packages path.

For Python 2.7, check that the Interpreter points to the python2.7 binary located in the cv virtual environment, and that the numpy variable points to the NumPy installation inside the cv environment.

Likewise, when compiling for Python 3, check the Python 3 section: the Interpreter should point to the python3.5 binary located in the cv virtual environment, while numpy should point to your NumPy install.

In either case, if you do not see the cv virtual environment in these variables paths, it’s almost certainly because you are NOT in the cv virtual environment prior to running CMake! If this is the case, access the cv virtual environment using workon cv and re-run the cmake command outlined above.

Configure your swap space size before compiling. Before you start the compile process, you should increase your swap space size. This enables OpenCV to compile with all four cores of the Raspberry Pi without the compile hanging due to memory problems.

Open your /etc/dphys-swapfile and then edit the CONF_SWAPSIZE variable

COMMAND-

$ sudo nano /etc/dphys-swapfile

and then edit the following section of the file:

# set size to absolute value, leaving empty (default) then uses computed value
# you most likely don't want this, unless you have a special disk situation
# CONF_SWAPSIZE=100
CONF_SWAPSIZE=1024

Notice that I've commented out the 100MB line and added a 1024MB line. This is the secret to getting OpenCV to compile with multiple cores on Raspbian Stretch. If you skip this step, OpenCV might not compile.

To activate the new swap space, restart the swap service:

COMMAND-

$ sudo /etc/init.d/dphys-swapfile stop
$ sudo /etc/init.d/dphys-swapfile start

Note: It is possible to burn out the Raspberry Pi microSD card, because flash memory has a limited number of writes before the card stops working. It is highly recommended that you change this setting back to the default when you are done compiling and testing the install (see below). To read more about swap sizes corrupting memory, see this page. Finally, we are now ready to compile OpenCV:

COMMAND-

$ make -j4 

The -j4 switch stands for the number of cores to use when compiling OpenCV. Since we are using a Raspberry Pi 3, we'll leverage all four cores of the processor for a faster compilation.

However, if your make command errors out, I would suggest starting the compilation over again and only using one core

$ make clean
$ make

Once OpenCV 3 has finished compiling, the OpenCV 3 build on Raspbian Stretch is complete.

From there, all you need to do is install OpenCV 3 on your Raspberry Pi 3:

COMMAND-

$ sudo make install
$ sudo ldconfig

Step #6: Finish installing OpenCV on your Pi

We’re almost done — just a few more steps to go and you’ll be ready to use your Raspberry Pi 3 with OpenCV 3 on Raspbian Stretch.

For Python 2.7:

Provided Step #5 finished without error, OpenCV should now be installed in /usr/local/lib/python2.7/site-packages. You can verify this using the ls command:

COMMAND-

$ ls -l /usr/local/lib/python2.7/site-packages/
total 1852
-rw-r--r-- 1 root staff 1895772 Mar 20 20:00 cv2.so

Note: In some cases, OpenCV can be installed in /usr/local/lib/python2.7/dist-packages (note dist-packages rather than site-packages). If you do not find the cv2.so bindings in site-packages, be sure to check dist-packages. Our final step is to sym-link the OpenCV bindings into our cv virtual environment for Python 2.7:

COMMAND-

$ cd ~/.virtualenvs/cv/lib/python2.7/site-packages/
$ ln -s /usr/local/lib/python2.7/site-packages/cv2.so cv2.so

For Python 3: After running make install, your OpenCV + Python bindings should be installed in /usr/local/lib/python3.5/site-packages. Again, you can verify this with the ls command:

COMMAND-

$ ls -l /usr/local/lib/python3.5/site-packages/
total 1852
-rw-r--r-- 1 root staff 1895932 Mar 20 21:51 cv2.cpython-34m.so

I honestly don't know why (perhaps it's a bug in the CMake script), but when compiling OpenCV 3 bindings for Python 3+, the output .so file is named cv2.cpython-35m-arm-linux-gnueabihf.so (or some variant thereof) rather than simply cv2.so (as with the Python 2.7 bindings). Again, I'm not sure exactly why this happens, but it's an easy fix. All we need to do is rename the file:

COMMAND-

$ cd /usr/local/lib/python3.5/site-packages/
$ sudo mv cv2.cpython-35m-arm-linux-gnueabihf.so cv2.so

After renaming to cv2.so , we can sym-link our OpenCV bindings into the cv virtual environment

for Python 3.5:

COMMAND-

$ cd ~/.virtualenvs/cv/lib/python3.5/site-packages/
$ ln -s /usr/local/lib/python3.5/site-packages/cv2.so cv2.so

Step #7: Testing your OpenCV 3 install

Congratulations, you now have OpenCV 3 installed on your Raspberry Pi 3 running Raspbian Stretch! But before we pop the champagne and get drunk on our victory, let’s first verify that your OpenCV installation is working properly.

Open up a new terminal, execute the source and workon commands, and then finally attempt to import the Python + OpenCV bindings:

COMMAND-

$ source ~/.profile
$ workon cv
$ python
>>> import cv2
>>> cv2.__version__
'3.3.0'
>>>

OpenCV 3 has been successfully installed on my Raspberry Pi 3 + Python 3.5 environment. Once OpenCV has been installed, you can remove both the opencv-3.3.0 and opencv_contrib-3.3.0 directories to free up a bunch of space on your disk:

COMMAND-

$ rm -rf opencv-3.3.0 opencv_contrib-3.3.0  

However, be cautious with this command! Make sure OpenCV has been properly installed on your system before blowing away these directories. A mistake here could cost you hours in compile time.

Once you are satisfied with the install, open your /etc/dphys-swapfile again and edit the CONF_SWAPSIZE variable back:

COMMAND-

# set size to absolute value, leaving empty (default) then uses computed value
# you most likely don't want this, unless you have a special disk situation
CONF_SWAPSIZE=100
# CONF_SWAPSIZE=1024

Notice that I've commented out the 1024MB line and uncommented the 100MB line. As stated above, larger swap spaces may lead to memory corruption, so I recommend setting it back to 100MB. If you skip this step, your memory card won't last as long. To revert to the smaller swap space, restart the swap service:

COMMAND-

$ sudo /etc/init.d/dphys-swapfile stop
$ sudo /etc/init.d/dphys-swapfile start
  • STEP 4: Setting Up the Python Program

You can verify that the camera works by running:

COMMAND-

$ raspistill -o image.jpg 

which will save an image from the camera in your current directory.

Once you have checked that the camera is working, download all the Python files and models from the link below:

LINK:

https://drive.google.com/file/d/0B98uoD6BbkpqZU9FT...

You can open up the file inspector and view the image.

Make sure you are using the virtual environment by typing the following commands:

COMMANDS-

$ source ~/.profile
$ workon cv

Next, navigate to the repository directory,

COMMANDS-

$ cd Smart-Security-Camera  

and install the dependencies for the project

COMMANDS-

$ pip install -r requirements.txt  

To get emails when objects are detected, you'll need to make a couple of modifications to the mail.py file. Open mail.py with vim (vim mail.py), then press i to edit. Scroll down to the following section:

# Email you want to send the update from (only works with gmail)
fromEmail = '[email protected]'
fromEmailPassword = 'password1234'

# Email you want to send the update to
toEmail = '[email protected]'

and replace with your own email/credentials.

The mail.py file logs into a gmail SMTP server and sends an email with an image of the object detected by the security camera. Press esc then ZZ to save and exit.

You can also modify the main.py file to change some other properties.

email_update_interval = 600            # sends an email only once in this time interval
video_camera = VideoCamera(flip=True)  # creates a camera object, flip vertically
object_classifier = cv2.CascadeClassifier("models/fullbody_recognition_model.xml")  # an OpenCV classifier
# other available models: facial_recognition_model.xml, fullbody_recognition_model.xml, upperbody_recognition_model.xml

Run the program with python main.py.

You can view a live stream by visiting the IP address of your Pi in a browser on the same network. You can find the IP address of your Raspberry Pi by typing ifconfig in the terminal and looking for the inet address. Visit <your Pi's IP address>:5000 in your browser to view the stream.

Note:

To view the live stream on a different network than your Raspberry Pi, you can use ngrok to expose a local tunnel. Once downloaded, run ngrok with ./ngrok http 5000 and visit one of the generated links in your browser. Note: The video stream will not start automatically on startup. To start it automatically, you will need to run the program from your /etc/rc.local file; see this video for more information about how to configure that. Receiving Emails: When receiving an email for the first time, you might get the following notification from Google:

By default, Google blocks apps from using SMTP without permission. We can solve this by clicking on the allow "less secure apps" link and toggling the feature on. The next object detected will send an email.

STEP 4: Blynk App Interface Setup

This is one of the easiest and most fun steps. Let's get started, shall we?

  • Downloading the Blynk App is the first obvious step. Download it from the App Store or Google Play Store, then sign up or log in in the app to get started.
  • Click on New Project to create a new project. Name it whatever you like. Under devices choose NodeMCU, under connection type choose WiFi, and click on Create.
  • Now you will get an Auth key in your email. Make sure to copy it and add it to your code.
  • Now click on the + sign to add widgets. You may need to buy some energy!
  • Now add three Gauges. Click on one of the Gauges and name it Temperature. Choose a color of your choice for this gauge. For the pin, choose Virtual Pin V6. Set the range from 0 to 50 °C (not sure for °F), make the label °C/°F and keep the reading rate at Push.
  • Repeat this for the other two Gauges using the data shown in the pictures.
  • Now, add a zeRGBa and set R to digital pin GP15, G to GP3 and B to GP1.
  • Now add 4 Buttons and change their colors accordingly. Set them as shown in the pictures.
  • Add a SuperChart, add 3 data streams (Temperature, Humidity and Gas), and set their colors, pins, ranges and suffixes. (The widget-to-pin mapping is summarised in the sketch right after this list.)
  • Now, add tabs. Go to the second tab and add Twitter, Notifications, Email and Eventor. In Twitter, add your Twitter username and password. In Notifications, switch on "Notify when hardware goes offline". In Email, set your email address. In Eventor you can set many triggers; see the picture for the triggers that I have set up.
  • You are done! Now click on the play button to use the interface that you have created. You can change the interface as you like. It is a really simple and fun process!
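For quick reference, here is a comment-style summary of how those widgets line up with the firmware from Step 1. Treat it as an assumption where noted: the gas gauge and the zeRGBa/relay buttons never appear in the sketch, so they are presumably bound directly to the NodeMCU's physical pins in the app.

// Widget-to-pin summary for the Blynk interface above (matches the Step 1 sketch).
//   Button "Motion detection"  -> V0   (read in BLYNK_WRITE(V0))
//   Button "Soil moisture"     -> V1   (read in BLYNK_WRITE(V1))
//   Gauge  "Humidity"          -> V5   (Blynk.virtualWrite(V5, h))
//   Gauge  "Temperature"       -> V6   (Blynk.virtualWrite(V6, t))
//   Gauge  "Gas"               -> A0   (assumed: read directly from the analog pin by the app)
//   zeRGBa                     -> GP15 (R), GP3 (G), GP1 (B)  (direct digital pins)
//   Relay buttons              -> GP16, GP14                  (direct digital pins)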

STEP 5:Making Octopod Structure

Warning: This is going to be one of the most time-consuming processes!

NOTE: You can skip this step and 3D print the enclosure that I have provided!

Actually, this step is optional, yet it is the most important one! You can just take a shoe box and avoid all of this hard work. But on the contrary, this hard work makes the project unique. The idea for this unique design struck me while I was doing my math homework. The shape is inspired by an octagon. Rather, this is a 3D octagon! So let's get started!

Making the Structure:

  • Take your cardboard and draw a rectangle of 9 cm x 9.5 cm (you can change the dimensions at your convenience). Now, joining end to end, draw 4 of these similar rectangles (8 if your cardboard is long enough).
  • Now make partial cuts (somewhat like paper creases) between these rectangles and cut out this whole long piece. Repeat this process until you have 4 of these long pieces.
  • Now, using a protractor, draw a 135° angle and cut it out as shown in the images. Make 16 of these same angles.
  • Using a glue gun, glue these angles in between the small pieces. Repeat this for all the joints.
  • Now, using the glue gun, join 2 of these open structures to make a closed structure (somewhat like an octagon).
  • Now glue the other two open structures perpendicularly, making a 3D shape.
  • Now cut 4 more pieces of 9 x 9.5 cm and glue them in between all the empty spaces.
  • Now you will be left with only 8 open triangles. Using acrylic butter paper, cut 8 triangles which will fit over these open areas, but don't glue them on yet.

Paint Job:

For this you need to head out to an open area! Wear your mask and gloves and apply one coat over the structure that you have created. You can go to YouTube to learn the proper way to apply a perfect coat. Let it dry for 4-5 hours and then apply one more coat. I think 3 coats will be good for this.

That's it! You have arrived with a unique piece of art.

STEP 6:Making Door Lock Enclosure

Really simple. Just take a shoe box and apply 2-3 even coats of spray paint. And maybe, for design, make a check pattern using duct tape like we did!

STEP 7:Assembling the Octopod

I have tried to make this step as simple as possible by making the schematic diagram. Refer to the picture or the file and make the connections accordingly. I will briefly explain all the connections!

  • We have connected the NodeMCU to a large solderless breadboard. We have also connected a breadboard power supply to the breadboard. The NodeMCU, all the sensors, the LEDs and the other components are powered by this power supply.
  • Following are the sensor connections: DHT11 → D4 / GP2; MQ2 → A0 / adc00; Soil Moisture Sensor → D2 / GP4; PIR → D6 / GP12; RGB: R → D8 / GP15, G → Rx / GP3, B → Tx / GP1; Relay: In1 → D0 / GP16, In2 → D5 / GP14. Make the connections accordingly (they are also written out as a pin map in the sketch after this list).
  • For powering this you can use a power bank or a wall adapter, which will be connected to the breadboard power supply.
  • Now, take your Raspberry Pi along with the Raspberry Pi Camera. Make a small hole in one of the walls of the cardboard and glue or tape your Raspberry Pi Camera behind it.
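The same wiring, written out as a NodeMCU pin map. This is only a reference sketch: the Dx labels assume the usual NodeMCU silkscreen, only the first four pins are actually referenced by the Step 1 sketch, and the RGB LED and relay channels are switched directly from the Blynk app's widgets rather than from the code.

// NodeMCU pin map for the wiring listed above (Dx labels per the standard NodeMCU silkscreen).
#define PIN_DHT11    2   // D4  - DHT11 data
#define PIN_SOIL     4   // D2  - soil moisture sensor (digital out)
#define PIN_GAS     A0   // A0  - MQ2 analog out
#define PIN_PIR     12   // D6  - PIR motion sensor
#define PIN_RGB_R   15   // D8  - RGB red
#define PIN_RGB_G    3   // RX  - RGB green
#define PIN_RGB_B    1   // TX  - RGB blue
#define PIN_RELAY_1 16   // D0  - relay channel In1
#define PIN_RELAY_2 14   // D5  - relay channel In2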

Now, insert all the electronics inside through any open triangle and close the triangles using the acrylic butter paper cut-outs that we made. It is better to leave 1 or 2 open, in case something inside needs to be fixed! Also, let the prongs of the soil moisture sensor stick outside.

All done! We have completed the making process of the Octopod! Now just switch on the power and keep your Octopod on the dining table, or maybe on the guest's table, and enjoy! To see the live feed from the Raspberry Pi, just open a browser and enter the IP address of the Pi. Enjoy!

STEP 8:Assembling the Door Lock

After uploading the code to your Arduino, make the connections as shown in the picture or in the Fritzing file! It is quite simple. Then take the handy shoe box that we made, make 2 holes in it for the LEDs to pop out, and let the servo stick out. These days mechanical locks driven by servos are also available in the market, though a plain servo works just as well. This is just an experiment, so please don't use this as your actual lock! Glue the RFID reader to one wall of the box and also glue in the small breadboard and the Arduino. You can use a wall adapter or a power bank to power this. Just power the Arduino and you will be good to go. Done!

CONCLUSION:

This was a really fun project to do!

I was really happy with how it turned out. I think the lights look really good, and it's well worth making one of these even just as a side lamp. I really can't thank the creators of the Blynk and OpenCV libraries enough; they are both really excellent pieces of software and made this project possible! As always, if you have any questions about any part of the process, please let me know and I will be happy to help. Thanks a lot! And please vote for us!

-Saksham

Arduino Blog

Full Instructable

UPDATE:

I have been working on integrating this system with Amazon Alexa, I am almost done. Will upload the code in 2-3 days!

Code

  • octopod.ino
  • octopod_door.ino
  • octopod_door_bluetooth.ino
octopod.ino (Arduino)
This is the main code for Arduino MKR 1010 (NodeMCU in my case)
If you are using the MAX32620FTHR, download the libraries for it, then change the board in the board settings. Also change the pins as given below:
ESP MAX
AO - - - - - - - - - - - - GPIO2
A1 - - - - - - - - - - - - - GPIO1

Soil Sensor MAX
analog sensor - - - GPIO3

Gas Sensor (MQ2) MAX
sensor - - - - - - - - - - - GPIO4

PIR Sensor MAX
sensor - - - - - - - - - - - -GPIO0

Relay MAX
1 - - - - - - - - - - - - - - - - - M0
2 - - - - - - - - - - - - - - - - - M1
/***************************************************************************
  OCTOPOD: A SMART HOME AUTOMATION PROJECT MADE BY SAKSHAM
  Download latest Blynk library here: https://github.com/blynkkk/blynk-library/releases/latest
  Download latest DHT Sensor library here: https://github.com/adafruit/DHT-sensor-library
***************************************************************************/

#include <ESP8266WiFi.h>        // Include ESP8266 WiFi Library
#include <BlynkSimpleEsp8266.h> // Include Blynk Library
#include <DHT.h>                // Include DHT sensor library

#define BLYNK_PRINT Serial

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "Your Blynk Auth Key";

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "Your WiFi SSID";
char pass[] = "Your WiFi Password";

#define DHTPIN 2   // What digital pin the temperature and humidity sensor is connected to
#define soilPin 4  // What digital pin the soil moisture sensor is connected to
#define gasPin A0  // What analog pin the gas sensor is connected to
#define pirPin 12  // What digital pin the PIR motion sensor is connected to

int pirValue;      // Place to store read PIR value
int soilValue;     // Place to store read soil moisture value
int PIRpinValue;   // Place to store the value sent by Blynk App Pin V0
int SOILpinValue;  // Place to store the value sent by Blynk App Pin V1

// Uncomment whatever type you're using!
#define DHTTYPE DHT11    // DHT 11
//#define DHTTYPE DHT22  // DHT 22, AM2302, AM2321
//#define DHTTYPE DHT21  // DHT 21, AM2301

DHT dht(DHTPIN, DHTTYPE);
BlynkTimer timer;

// This function sends Arduino's up time every second to Virtual Pin (5).
// In the app, Widget's reading frequency should be set to PUSH. This means
// that you define how often to send data to Blynk App.

BLYNK_WRITE(V0) // V0 pin from Blynk app tells if Motion Detection is ON
{
  PIRpinValue = param.asInt();
}

BLYNK_WRITE(V1) // V1 pin from Blynk app tells if Soil Moisture is ON
{
  SOILpinValue = param.asInt();
}

void sendSensor()
{
  int h = dht.readHumidity();
  int t = dht.readTemperature(); // or dht.readTemperature(true) for Fahrenheit

  if (isnan(h) || isnan(t)) {
    Serial.println("Failed to read from DHT sensor!"); // to check if sensor is not sending any false values
    return;
  }
  // You can send any value at any time.
  // Please don't send more than 10 values per second.
  Blynk.virtualWrite(V5, h); // send humidity to pin V5
  Blynk.virtualWrite(V6, t); // send temperature to pin V6
}

void getPirValue(void)
{
  pirValue = digitalRead(pirPin);
  if (pirValue) // digital pin of PIR gives a high value on human detection
  {
    Serial.println("Motion detected");
    Blynk.notify("Motion detected");
  }
}

void getSoilValue(void)
{
  soilValue = digitalRead(soilPin);
  if (soilValue == HIGH) // digital pin of soil sensor gives a low value when moisture is less
  {
    Serial.println("Water Plants");
    Blynk.notify("Water Plants");
  }
}

void setup()
{
  // Debug console
  Serial.begin(9600);
  Blynk.begin(auth, ssid, pass);
  // You can also specify server:
  //Blynk.begin(auth, ssid, pass, "blynk-cloud.com", 8442);
  //Blynk.begin(auth, ssid, pass, IPAddress(192,168,1,100), 8442);
  dht.begin();                        // Begin DHT readings
  Blynk.tweet("OCTOPOD IS ONLINE!");  // Tweet on your Twitter handle that your project is online
  pinMode(pirPin, INPUT);             // The PIR pin is meant to take input only
  pinMode(soilPin, INPUT);            // The soil sensor pin is meant to take input only
  // Setup a function to be called every second
  timer.setInterval(1000L, sendSensor);
}

void loop()
{
  Blynk.run();
  timer.run();
  if (PIRpinValue == HIGH)  // V0 pin from Blynk app tells if Motion Detection is ON
  {
    getPirValue();
  }
  if (SOILpinValue == HIGH) // V1 pin from Blynk app tells if Soil Moisture is ON
  {
    getSoilValue();
  }
}
octopod_door.ino (Arduino)
Code for Automatic Door Lock Control (NO BLUETOOTH VERSION)
/***********************************************************************************************************
  OCTOPOD: A SMART HOME AUTOMATION PROJECT MADE BY SAKSHAM
  ARDUINO RFID DOOR LOCK CODE
  Library Required - MFRC522
************************************************************************************************************/

#include <SPI.h>
#include <MFRC522.h>
#include <Servo.h>

#define SS_PIN 10
#define RST_PIN 9
#define LED_G 5   // define green LED pin
#define LED_R 4   // define red LED
#define BUZZER 2  // buzzer pin

MFRC522 mfrc522(SS_PIN, RST_PIN); // Create MFRC522 instance.
Servo myServo;                    // define servo name

void setup()
{
  Serial.begin(9600);  // Initiate a serial communication
  SPI.begin();         // Initiate SPI bus
  mfrc522.PCD_Init();  // Initiate MFRC522
  myServo.attach(3);   // servo pin
  myServo.write(0);    // servo start position
  pinMode(LED_G, OUTPUT);
  pinMode(LED_R, OUTPUT);
  pinMode(BUZZER, OUTPUT);
  noTone(BUZZER);
  Serial.println("Put your card to the reader...");
  Serial.println();
}

void loop()
{
  // Look for new cards
  if ( ! mfrc522.PICC_IsNewCardPresent()) {
    return;
  }
  // Select one of the cards
  if ( ! mfrc522.PICC_ReadCardSerial()) {
    return;
  }
  // Show UID on serial monitor
  Serial.print("UID tag :");
  String content = "";
  byte letter;
  for (byte i = 0; i
octopod_door_bluetooth.ino (Arduino)
Code for Automatic Door Lock Control (Bluetooth Version)
/***********************************************************************************************************
  OCTOPOD: A SMART HOME AUTOMATION PROJECT MADE BY SAKSHAM
  ARDUINO RFID DOOR LOCK CODE
  Library Required - MFRC522
************************************************************************************************************/

#include <SoftwareSerial.h>
SoftwareSerial BTserial(0, 1); // RX | TX

#include <SPI.h>
#include <MFRC522.h>
#include <Servo.h>

#define SS_PIN 10
#define RST_PIN 9
#define LED_G 5   // define green LED pin
#define LED_R 4   // define red LED
#define BUZZER 2  // buzzer pin

MFRC522 mfrc522(SS_PIN, RST_PIN); // Create MFRC522 instance.
Servo myServo;                    // define servo name

void setup()
{
  BTserial.begin(9600);  // Initiate a serial communication over Bluetooth
  BTserial.println("Waiting for connections...");
  SPI.begin();           // Initiate SPI bus
  mfrc522.PCD_Init();    // Initiate MFRC522
  myServo.attach(3);     // servo pin
  myServo.write(0);      // servo start position
  pinMode(LED_G, OUTPUT);
  pinMode(LED_R, OUTPUT);
  pinMode(BUZZER, OUTPUT);
  noTone(BUZZER);
  BTserial.println("Put your card to the reader...");
  BTserial.println();
}

void loop()
{
  // Look for new cards
  if ( ! mfrc522.PICC_IsNewCardPresent()) {
    return;
  }
  // Select one of the cards
  if ( ! mfrc522.PICC_ReadCardSerial()) {
    return;
  }
  // Show UID on serial monitor
  BTserial.print("UID tag :");
  String content = "";
  byte letter;
  for (byte i = 0; i

Custom Parts and Enclosures

This is the basic design that you can use if you want to make the enclosure out of cardboard/wood like me: octopod_v2_ukTmIJ0uMl.f3d
A proper enclosure that you can 3D print: octo_2_v3_sii4tuCF7d.f3d

Schematics

Pin configuration might be different with the Arduino MKR 1010.
Without Bluetooth.
For Bluetooth, connect Rx (HC-05) --> Tx (Arduino),
Tx (HC-05) --> Rx (Arduino),
5V to 5V,
and Ground to Ground.

ESP MAX
AO - - - - - - - - - - - - GPIO2
A1 - - - - - - - - - - - - - GPIO1

Soil Sensor MAX
analog sensor - - - GPIO3

Gas Sensor (MQ2) MAX
sensor - - - - - - - - - - - GPIO4

PIR Sensor MAX
sensor - - - - - - - - - - - -GPIO0

Relay MAX
1 - - - - - - - - - - - - - - - - - M0
2 - - - - - - - - - - - - - - - - - M1

制造工艺

  1. 家庭自动化应用
  2. 智能家居物联网的阴险恶意广告猎物
  3. RASPBERRY PI 家庭自动化
  4. 智能百叶窗
  5. IOT - 使用 ESP8266、Arduino 和超声波传感器的智能罐
  6. 使用 1Sheeld 的智能家居自动化和安全系统
  7. Tech-TicTacToe
  8. Arduino 倒数计时器
  9. 遥控保时捷汽车(Arduino 项目)
  10. 家庭监视器
  11. 使用 Arduino 和 ESP8266 的 WiFi 登录页面的智能门锁
  12. 工业 4.0 中的自动化