
Design and Development of a Low-Cost Inspection Robot

1. Introduction

Our nation's infrastructure is rapidly aging and deteriorating. There is currently no mechanism for thoroughly inspecting the condition of our bridges, waste tanks, pipelines, and reactors. Many of these structures have reached the end of their design life and need to be inspected for damage. As on land, the hulls and decks of U.S. Navy ships and oil tankers must be inspected for signs of corrosion. For several reasons, many older structures, such as tall bridges and waste tanks, are often difficult to probe or inspect. The most common reason is that the inspection process is hazardous to humans, or that the structure has inaccessible sections. Another common reason is that current probe technology may be inadequate for accurately inspecting such structures. As a result, manual inspection is infrequent, laborious, expensive, dangerous, and error-prone. This problem presents an excellent opportunity for a well-built inspection robot.

Inspection robots are typically designed and developed by large, well-funded teams of electrical and mechanical engineers. Commercial robots such as the PackBot 510 (http://endeavorrobotics.com/products) can cost more than $100,000. Given the constraints of a single science fair project, the scope here was to design, develop, and test an inexpensive prototype inspection robot. The goal of this project was to develop a small, light, and inexpensive prototype inspection robot that can stay attached to the surface it is inspecting. The project consisted of the following tasks: a review of the literature and existing designs; a requirements statement; robot design; development and testing of an initial prototype; engineering mechanics calculations; programming the robot in Python; development and testing of a second prototype; and development and testing of the final prototype.

Before physically building the robot, the 3D modeling software SketchUp was used to visualize the robot and refine the design. The robot was built from commercial off-the-shelf components, including a Raspberry Pi 3 module that controls the robot. The design was improved iteratively through repeated testing. Python code was written from scratch to program the robot. As the design evolved, both the hardware and the software had to be modified in parallel. The prototype was demonstrated to engineering experts at Washington River Protection Solutions and Pacific Northwest National Laboratory, as well as to a senior design class at Washington State University, Tri-Cities. Based on the experts' feedback, the engineering mechanics calculations, and the test results, a third prototype was built and programmed. The resulting robot can climb walls at a reasonable speed and carries multiple cameras to aid navigation and inspection. The robot produced by this project represents a unique design, with software written specifically for this purpose. The prototype serves as a flexible platform to which new sensors can be added as needed to enhance its inspection capabilities.

2. Literature Review

Before starting work on this project, I conducted a literature review to assess existing solutions. Currently available probes can be divided into two categories: stationary and mobile.

Stationary probes are the most widely used tools for inspecting structures. They provide very detailed information about a specific part of a structure and can monitor it continuously. However, once they are positioned in one location, their range of observation is limited. Because they lack mobility, they are not suitable for monitoring large structures. The other category consists of probes mounted on robots. Because the probes can move freely, they offer a much greater degree of functionality. Most robots currently on the market are highly specialized for a particular task or type of inspection. Some robots may specialize in traversing water, high altitudes, or swampy semi-solid terrain, but none of these are suited to structural inspection.

The underwater inspection robot AQUA 1 is a good example. AQUA is a highly specialized and expensive inspection robot. It crawls along the beds of bodies of water and performs three-dimensional (3D) scans of its area. It can follow a designated path underwater using cameras, sensor scans, and algorithms. Although it is an inspection robot, it is of no use for structural inspection, because it lacks the ability to crawl on ferrous surfaces.

Another example is the AETOS 2 aerial drone. The AETOS drone is a quadcopter used for surveying, landscape mapping, and emergency response. The robot itself is a remote-controlled quadcopter that carries a high-powered camera. The camera can capture and record detailed images and video of structures and landscapes. The AETOS drone is versatile and can even inspect exposed structures, such as bridges, from the air. The drone's drawback is that it is not suited to detailed structural inspection, because wind can move the drone during an inspection. Nor can the drone be used inside enclosed structures, because of the risk of crashing. The AETOS drone requires frequent recharging and cannot stay airborne for long periods. It is also expensive, easily damaged, and difficult to recover after a crash.

Some robots currently available are equipped with powerful sensors, multiple cameras, and wall-climbing ability. Such robots are extremely expensive and cannot be deployed in large numbers for inspection. The risk of damage and the cost of replacement associated with these robots are also very high. Damage is a very real consideration: as of March 2017, nearly all of the robots used to inspect the damaged nuclear reactors at Japan's Fukushima Daiichi plant had failed. The MagneBike 3 is one example of an expensive robot. The MagneBike is a fairly new robot that is not yet sold commercially but is currently under testing and private development. The MagneBike is a robotic bicycle whose two wheels are connected to the body by free joints. The joints allow the robot to move freely on any ferrous surface, regardless of its contour. Each wheel also has two levers on its sides, resembling training wheels. Each lever is slightly longer than the radius of the wheel. The levers are used to detach a wheel from the magnetic surface it is attached to, allowing it to move smoothly over inside corners. The MagneBike can be configured to carry a high-definition camera and 3D mapping sensors, making it possible to create 3D models of its surroundings. The robot is controlled and powered through a cable, making it a tethered device that is easy to retrieve. The MagneBike's drawbacks are that it is hard to replace if it breaks and, judging by the parts it uses, quite expensive.

A similar magnetic-wheeled robot is the U.S. Navy's Multi-Segmented Magnetic Robot 4 (MSMR). The MSMR is a naval robot designed for ship hull inspection. Although the MSMR was not designed for inspecting structures on land, it could easily be adapted for that purpose. Moreover, inspecting ship hulls and inspecting industrial structures are not dissimilar tasks. The MSMR is a three-segment robot; each segment is a metal box housing electronics, with two wheels attached to its sides. The segments are joined by flexible or articulated connectors.

Each wheel can operate independently, and when all the wheels work together, the robot can easily scale 3D obstacles. The wheels are magnetized and can support the robot. The robot's drawback is that it is untethered and therefore can only run on battery power. This is disadvantageous because it makes the robot harder to control and limits its inspection life. The MSMR is also unreleased and available only to the Navy, and it is likely to remain that way for the foreseeable future.

Another example of an inspection robot is the omnidirectional wall-climbing microrobot 5. The microbot is a miniature circular robot weighing only 7.2 grams. It is 26 mm in diameter and 16.4 mm tall. The robot is currently in the final stages of testing and is not yet commercially available. It carries three magnetic-wheeled micromotors with 360° turning capability. The wheels allow it to traverse most ferrous surfaces with ease. The microbot can be configured to carry a single miniature camera, which can send simple images and video back to the controller. The robot is also tethered: it is connected to its controller by copper wires that can be insulated. Although the robot is inexpensive and can be used in swarms, it can only support a single camera, and its tether is weak. It also has no room for expansion and cannot support any sensors.

Some robot designs attach to surfaces using suction cups or the negative pressure generated by propellers. Compared with magnetic wheels, suction cups offer limited mobility and are not suitable for heavier robots equipped with multiple cameras and sensors. In addition, suction degrades over time due to mechanical wear. Negative-pressure systems require considerable and constant power; if power is lost, the robot detaches from the surface. Each previously attempted design has its strengths and weaknesses, but none fully solves the inspection problem. The literature review allowed me to survey the landscape, understand what had been tried before, and come up with my own design.

3. Requirements Statement

The inspection robot must satisfy several constraints. The first constraint is size. Ideally, an inspection robot should be small. Some of the spaces the robot will inspect are less than a foot wide and tall. The size limit for this project is 25 x 25 x 25 cm. A smaller size increases the robot's maneuverability and versatility in complex environments, such as bridges. A small size also has the advantage that the robot consumes less power and is easier to operate. The robot must also be tethered. Compared with a wireless robot, a tethered robot can send more data, faster and more reliably.

The robot's operator does not have to worry about the robot moving out of wireless range, and can easily extract the robot in the event of an accident or failure. In addition, the inspection robot must support multiple cameras for thorough inspection and navigation. Live camera feeds from the robot to the operator are necessary for the robot to navigate the structure it is inspecting accurately and to warn the operator of immediate hazards. Another constraint the robot must satisfy is the ability to climb ferrous surfaces. The simplest way to meet this constraint is to give the robot magnetic wheels or magnets, allowing it to scale ferrous surfaces with ease. This works because ferromagnetic materials, such as mild steel, low-alloy steel, and iron, are the main materials used to build such structures. Finally, the robot should be inexpensive, ideally costing less than $200. An inexpensive robot is easy to replace, and when inspecting old structures, robot damage should come as no surprise. A cheap robot also means that more robots can be bought for a single task, which can greatly increase inspection efficiency.

4. Design and Development of the Robot

4.1 Prototype 1: LEGO EV3

To design a robot that satisfies the constraints above, I began prototyping with the LEGO EV3 control module and other LEGO parts. I started with LEGO because it is easy to build with and fairly simple to assemble into a robot. The EV3 module is a programmable robotics core used to control LEGO robots, and one was already available at home. With LEGO, it was easy to create a sturdy robot body with four attached motors and wheels. When starting with the EV3, I tried to give my robot a flat, compact design. Because of the way LEGO pieces fit together, this idea began to fail when it came time to attach the third and fourth motors: I could not mount those motors onto the control module. Next, I moved to an angled design, with the module suspended above the rest of the robot and the motors arching out from the main frame. After designing a main support frame that fit comfortably beneath the controller, I was able to design the motor mounts. The mounts are downward-angled arms that extend from the main frame and attach to the motors. The motors are fully secured at the ends of the mounts to prevent structural failure during testing. To further stabilize the motors and their mounts, I used rigid connectors to link each motor to its nearest neighbor. The connectors also prevent any one motor from running faster than the others, because they tie the motors together and create a secondary frame.

After completing the structural design and construction of the LEGO robot, I turned to the design of the wheels. I started with four standard-size EV3 wheels. Each wheel has a radius of 17 mm and a width of 17 mm, and comes with a hollow rubber tire attached. To configure the wheels for magnetic movement, I first removed the tires, leaving only the bare plastic hubs. The plastic has deep indentations covering most of the wheel, so I could not attach magnets directly to it. The magnets I used for the LEGO robot were D51-N52 disc magnets from K&J Magnetics 6. The D51-N52 is a neodymium (NdFeB) disc magnet, 5/16" (8 mm) in diameter and 1/16" (1.6 mm) thick. I chose these magnets because they are small enough that I could wrap chains of them around the wheels to form magnetic bands. Each D51-N52 has a pull force of 2.05 lb (9.1 N) when attached directly to a steel plate. The four wheels wrapped in magnets had enough magnetic force to support the LEGO robot, as shown in Figure 1.

I tested several methods of attaching the magnets to the robot's wheels. I first tried wrapping a sheet of paper around each wheel and supergluing the magnets to the paper. This idea did not work, because the paper was too thin to provide a solid surface for the magnets and not strong enough to keep the magnets from clumping together and coming off the wheel. Next, I tried filling the indentations in the wheels with clay or Play-Doh and attaching the magnets on top. This idea also failed, because neither material bonds with superglue. After both ideas failed, I tried a hybrid of the two. I filled the indentations in the wheels with folded, compressed strips of paper, and fixed the strips in place with superglue.

After that, I wrapped the folded paper and reinforced it with thin metal wire around each wheel. The reinforced paper formed a surface that was sturdy yet flexible enough for me to superglue the magnets onto. After successfully attaching magnets to all four wheels, I wrapped each wheel in duct tape rather than using the tires. I chose not to use the tires because their thickness would reduce the magnets' pull force too much, while tape does not significantly reduce the pull force and still provides traction. After wrapping the wheels, I ran LEGO axles through each wheel and used them to attach each wheel to a motor.

Mounting the wheels marked the end of my first prototype's development. I tested the prototype by pressing it against a steel door. The robot stuck firmly to the door without slipping. The robot failed to meet several design constraints: it was larger than 25 x 25 x 25 cm, cost more than $200, had no tether, required batteries, and supported no cameras.

However, this initial prototype met one key goal. The real purpose of my first prototype was to help me understand how to use magnets to attach a robot effectively to ferrous surfaces, and how to design the robot and wheels to address the inspection problem.

4.2 Material and Component Selection for the Second Prototype

After building my first robot prototype out of LEGO, I decided to select components and to design and visualize my next prototype on a computer before starting construction. First, I decided to use a Raspberry Pi as the core of my future prototypes. I chose the Raspberry Pi because, despite being very light and compact, it is a fairly powerful board. The Pi can connect to motor driver boards while still offering USB and Ethernet capability. In addition, the Pi is a very inexpensive computer and comes with a free operating system package. Figure 2 is a photograph of a Raspberry Pi 3.

Next, I decided to use L298N motor driver boards to control my motors. The L298N is a fairly simple motor controller that can drive up to two DC motors. The driver is documented as handling voltages up to 35 V. Since most of the motors I wanted to use were in the 6 V-12 V range, the L298N suited me well. The board itself is very small, only about a third the size of a Raspberry Pi. Because of this simplicity, it is easy to buy several L298Ns at low cost. I also decided to start with a single camera for my first Raspberry Pi prototype. The camera I chose was the Raspberry Pi NoIR camera.

The NoIR camera is a Pi-compatible camera designed for night vision. While structures such as bridges may be lit, the inside of a tank can be dark, so I chose the Pi NoIR camera over the standard Pi camera. I also chose the NoIR camera because it is built for the Raspberry Pi and easier to use than any other camera.

For my motors, I chose standard 6 V DC plastic Arduino motors. I chose these motors, even though they are Arduino motors, because I knew my driver board could run any DC motor within its voltage limits. I performed an engineering mechanics calculation, described below, to estimate the required motor torque. The plastic motors are very easy to use and wire, and they are inexpensive. If one of them breaks, it is easy to replace it with a new one. The motors also come with plastic wheels that are large enough to support and move the robot, yet small enough to control easily. In addition to my two drive motors, I wanted to use another motor to create a lever mechanism underneath the robot to support it. The mechanism would be used to lift the front end of the robot off the ground so that it could better attach itself to ferrous surfaces. I planned to mount the robot on a simple plastic robot chassis and to use metal strips to form an elevated platform for any parts the chassis itself could not hold. I decided to power the L298Ns with one 4-AA battery pack or two 2-AA battery packs. The Raspberry Pi was to receive power from a USB cable running to a power outlet. The robot would be controlled by a wired Xbox 360 controller connected over USB. I chose an Xbox controller because it has a directional pad that is well suited to controlling the robot's movement. It also has extra buttons that can be assigned to different tasks in the robot's code, such as camera control. For the magnets, I decided to continue using D51-N52 magnets, because I had already shown with my first prototype that wrapping them into bands around the wheels is a viable way to create magnetic wheels.

4.3 Computer-Aided Design (CAD) of the Second Prototype

After deciding on the materials and components for my second prototype, I went on to construct CAD drawings of it, so that I could build it easily once the parts I had specified arrived. To make the CAD drawings, I used a program called SketchUp, because it is free, easy to teach myself, and easy to use. Using online measurements of the parts I planned to use, plus physical measurements once the parts arrived, I was able to construct a realistic 3D CAD drawing of my robot prototype, shown in Figure 3. I then refined the prototype further, taking optimal screw placement into account. After several iterations of adding design features and refining details, I arrived at a satisfactory 3D model of the robot. This helped simplify the hardware portion of my project, since I only needed to build a physical copy of the computer model using the real parts.

4.4 Prototype 2a: Pre-Made Chassis

Building Prototype 2a

Once all my parts had arrived and my CAD drawings were complete, building the robot was straightforward. I began by drilling holes to mount the Raspberry Pi. To mark the points to drill on the plastic chassis, I placed the Pi on top of the rear end of the chassis and marked the area under each screw hole with a fine pencil. I then chose a drill bit slightly larger than the screw holes on the Pi and drilled each hole. I drilled holes at the front of the chassis in the same way to hold the driver board. To mount the driver board and the Raspberry Pi, I used #4-40 bolts and nuts. After mounting the two boards, I attached the rear wheel and the motors to pre-cut holes in the chassis using the screws provided. The chassis, motors, and rear wheel came with nuts, bolts, and instructions, so attaching these components to the chassis was easy.

For this prototype, I attached a third motor to the underside of the robot with heavy-duty double-sided tape, directly between the two drive motors. I then took four popsicle sticks and glued them together lengthwise in pairs, producing two very thick popsicle sticks. I cut these sticks in half and traced the end of the motor shaft onto the end of each half-stick. I then drilled a hole in each new stick to fit the motor shaft. The result was four thick, half-length, drilled popsicle sticks. I chose the two that fit best and attached them to each end of the middle motor's shaft, securing them with hot glue. The purpose of this motorized device was to act as a lifter that, once the motor was activated, would push the robot away from the surface it was on. The device was intended to let the robot detach itself from ferrous surfaces. It would also let the robot lift its main magnetic wheels off the ground so that it could attach itself to a ferrous wall from another surface. This is one of several unique design features of this project. The magnetic wheel design is another innovative feature.

After mounting the third motor, I used perforated metal hanger tape to create bridge-like structures above the driver board and the Raspberry Pi. The hanger tape served as a secondary surface on which additional parts could be mounted. Thanks to the perforations, it was easy to drill holes in the chassis to mount the metal hanger tape and secure it with the remaining bolts and nuts. On top of the hanger bridge at the front of the robot, I attached a second driver board to control the third motor, since each board can control only two motors. I attached this driver board with double-sided tape. Using more double-sided tape, I attached the 4-AA battery holder to the top of the rear metal hanger to power the main driver board. I also attached two 2-AA battery holders to the front of the robot to power the second driver board.

I completed the second prototype by hot-gluing the Raspberry Pi NoIR camera to the front of the metal hanger tape bridge. With the robot built, all that remained was magnetizing the wheels. I removed the tires from the wheels and applied a layer of double-sided tape to each wheel. The plastic wheels and motors are shown in Figure 4. I ringed the small circular D51-N52 magnets around the rim of each wheel so that there were two rings per wheel. After adding all the magnets, I covered both wheels with a layer of duct tape to protect the magnets. To magnetize the rear wheel, I hot-glued magnets in rings around the wheel and then wrapped them in duct tape. Duct tape was used because it is thin enough not to reduce the pull force significantly, yet strong enough to protect the magnets.

Wiring Prototype 2a

After attaching all the components of my robot, I began wiring them together. Power for the Raspberry Pi comes in through the micro USB port on its side. I then connected the battery packs to their respective driver boards. The motors were connected to the driver boards using the wires that came with them: I soldered wires to the power leads on the motors and screwed them into the driver boards. Next, I connected the GPIO pins on the Pi to the driver boards. The GPIO pins are the general-purpose input/output pins on the Raspberry Pi. Some pins serve as ground or power, while others can be used to send signals over wires. I connected GPIO 2 and 6 to one driver board, and 4 and 9 to the other. These are the 5 V and ground pins used to enable motor movement and control through the driver boards. I then connected pins 11, 12, 13, and 15 to the first driver board, and pins 16 and 18 to the other. These pins send the actual motor control signals. Each motor requires two pins for control, and since the robot uses three motors, the driver boards need six connected signal GPIO pins for motor control, plus 5 V and ground for each board. After connecting the necessary GPIO pins, I connected an Ethernet cable between the Pi and my laptop, so that the laptop could establish a remote desktop connection to the Raspberry Pi without a monitor, keyboard, and mouse. I also connected a powered USB hub to the Pi. The hub connected to the Xbox controller so that I could control the robot with it.
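The two-pins-per-motor scheme can be sketched as a small lookup table. The forward/backward role of each pin below (11/12 for the left motor, 13/15 for the right, 16/18 for the lifter) matches what the pin test later in this section confirmed; the helper function itself is hypothetical and only for illustration.

```python
# Hypothetical sketch of the two-pins-per-motor wiring on prototype 2a:
# each motor gets one GPIO pin for forward and one for backward.
MOTOR_PINS = {
    "left":   {"forward": 11, "backward": 12},
    "right":  {"forward": 13, "backward": 15},
    "lifter": {"forward": 16, "backward": 18},  # lifter out / lifter in
}

def pins_to_drive(direction):
    """Return the set of GPIO pins to raise HIGH for a given drive direction."""
    if direction == "forward":
        return {MOTOR_PINS["left"]["forward"], MOTOR_PINS["right"]["forward"]}
    if direction == "backward":
        return {MOTOR_PINS["left"]["backward"], MOTOR_PINS["right"]["backward"]}
    if direction == "right":   # turn right: left wheel forward, right wheel backward
        return {MOTOR_PINS["left"]["forward"], MOTOR_PINS["right"]["backward"]}
    if direction == "left":    # turn left: left wheel backward, right wheel forward
        return {MOTOR_PINS["left"]["backward"], MOTOR_PINS["right"]["forward"]}
    return set()               # anything else: no pins HIGH (motors stopped)

print(sorted(pins_to_drive("forward")))  # [11, 13]
```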

Programming Prototype 2a

The hardest part of designing my second prototype was the code. My first prototype was only a hardware model; it ran no code. My reasoning was that, try as I might, I could not get all four motors of my first prototype to move simultaneously under code control. The first prototype was created mainly to test the magnetic wheel concept and to help me work out an ideal design for future prototypes. On the Raspberry Pi, I coded in Python, because it was the only language for the Raspberry Pi that I understood. But even before I could start writing code, I had to set up my robot as a remote desktop compatible with my laptop.

To set up my Pi, I temporarily connected a monitor, keyboard, and mouse to it. I then booted the Pi and gave it a static IP over Ethernet. I chose 192.168.1.10 because it is a simple, easy-to-use address. To set the IP, I had to edit /etc/dhcpcd.conf on my Raspberry Pi. The dhcpcd.conf file controls the Pi's IP and network connections; to set a static IP, I had to add these lines to the beginning of the file:

interface eth0
static ip_address=192.168.1.10
static routers=192.168.1.1

After setting the Pi's static IP, I installed the Linux package tightvncserver. Tightvncserver allows a VNC (Virtual Network Computing) server to be set up on the Raspberry Pi; remote desktop connections run through the VNC server. Once the VNC server was set up, I was able to create a remote desktop connection to my Raspberry Pi from my laptop. After confirming that I could access my Pi, I disconnected the monitor, keyboard, and mouse. I then began coding the robot.

First, I needed a way to find out which GPIO pin corresponded to which motor on my Pi. Each GPIO pin, when activated, spins a single motor forward or backward at a constant speed. Each motor therefore has two corresponding GPIO pins: one forward-motion controller and one reverse-motion controller. To find out what each GPIO pin did, I wrote a program that tested each pin individually, so that I could note down what each one did. I recorded my observations as comments in the program:

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)

GPIO.setup(12,GPIO.OUT) #Left Backward
GPIO.setup(11,GPIO.OUT) #Left Forward
GPIO.setup(13,GPIO.OUT) #Right Forward
GPIO.setup(15,GPIO.OUT) #Right Backward
GPIO.setup(16,GPIO.OUT) #Lifter Out
GPIO.setup(18,GPIO.OUT) #Lifter In

GPIO.output(12,GPIO.HIGH)
sleep(2)
GPIO.output(12,GPIO.LOW)
sleep(1)

GPIO.output(11,GPIO.HIGH)
sleep(2)
GPIO.output(11,GPIO.LOW)
sleep(1)

GPIO.output(13,GPIO.HIGH)
sleep(2)
GPIO.output(13,GPIO.LOW)
sleep(1)

GPIO.output(15,GPIO.HIGH)
sleep(2)
GPIO.output(15,GPIO.LOW)
sleep(1)

GPIO.output(16,GPIO.HIGH)
sleep(0.5)
GPIO.output(16,GPIO.LOW)
sleep(1)

GPIO.output(18,GPIO.HIGH)
sleep(0.5)
GPIO.output(18,GPIO.LOW)
sleep(1)

Next, I needed software that would let my Raspberry Pi receive and interpret the signals sent by the Xbox controller. Xboxdrv is an Xbox controller driver for Linux. I installed it and used it to try to connect my Pi to my Xbox controller. Normally, running the command "sudo xboxdrv" in a prompt displays the input of a connected Xbox controller in the command prompt window. However, my Xbox controller was not made by Microsoft, so xboxdrv did not support it out of the box. I solved the problem by running the following command:

sudo xboxdrv --device-by-id 1bad:f02e --type xbox360 --detach-kernel-driver --mimic-xpad

I was able to create this command after researching how to use xboxdrv and how to modify its normal behavior with options. The command identifies the connected controller by its device ID, 1bad:f02e, and treats it as an Xbox 360 controller. It allowed me to view the controller's input in the command prompt. I then needed a way to access the input values from a Python program, so that I would be able to use the values to control my robot. After some searching online, I found a Python program on GitHub 7, written by martinohanlon, that received and displayed Xbox controller input values. I downloaded the code onto my Raspberry Pi and started modifying it to control the motors on the robot based on the values it received. The problem I faced was that the code was so long and complex that I was unable to tell where the input value from the Xbox controller was read. To solve that problem, I went through the program and added a series of print statements that printed the program's variables as it ran. By observing the values as buttons were pressed, and then deleting print statements, I was able to find the main event system in the program at line 265:

#run until the controller is stopped
while(self.running):
    #react to the pygame events that come from the xbox controller
    for event in pygame.event.get():
        #thumb sticks, trigger buttons
        if event.type == JOYAXISMOTION:
            #is this axis on our xbox controller
            if event.axis in self.AXISCONTROLMAP:
                #is this a y axis
                yAxis = True if (event.axis == self.PyGameAxis.LTHUMBY or event.axis == self.PyGameAxis.RTHUMBY) else False
                #update the control value
                self.updateControlValue(self.AXISCONTROLMAP[event.axis],
                                        self._sortOutAxisValue(event.value, yAxis))
            #is this axis a trigger
            if event.axis in self.TRIGGERCONTROLMAP:
                #update the control value
                self.updateControlValue(self.TRIGGERCONTROLMAP[event.axis],
                                        self._sortOutTriggerValue(event.value))
        #d pad
        elif event.type == JOYHATMOTION:
            #update control value
            self.updateControlValue(self.XboxControls.DPAD, event.value)
        #button pressed and unpressed
        elif event.type == JOYBUTTONUP or event.type == JOYBUTTONDOWN:
            #is this button on our xbox controller
            if event.button in self.BUTTONCONTROLMAP:
                #update control value
                self.updateControlValue(self.BUTTONCONTROLMAP[event.button],
                                        self._sortOutButtonValue(event.type))

Within the main event system, I searched for the component that handled the directional pad (d-pad) on the Xbox controller, as I was planning on using it to control the motors on the robot.

After finding the directional pad control component, I added some statements to the end that sent signals through the GPIO pins to the motors whenever a certain direction was pressed on the d-pad:

#d pad
elif event.type == JOYHATMOTION:
    #update control value
    self.updateControlValue(self.XboxControls.DPAD, event.value)
    if event.value == (0,1): #Forward
        GPIO.output(11,GPIO.HIGH) #Left Forward
        GPIO.output(13,GPIO.HIGH) #Right Forward
    elif event.value == (0,-1): #Backward
        GPIO.output(12,GPIO.HIGH) #Left Backward
        GPIO.output(15,GPIO.HIGH) #Right Backward
    elif event.value == (1,0): #Right
        GPIO.output(11,GPIO.HIGH) #Left Forward
        GPIO.output(15,GPIO.HIGH) #Right Backward
    elif event.value == (-1,0): #Left
        GPIO.output(12,GPIO.HIGH) #Left Backward
        GPIO.output(13,GPIO.HIGH) #Right Forward
    else: #d-pad released: stop all motors
        GPIO.output(12,GPIO.LOW)
        GPIO.output(11,GPIO.LOW)
        GPIO.output(13,GPIO.LOW)
        GPIO.output(15,GPIO.LOW)

After successfully configuring the motors, my next challenge was to code the Raspberry NoIR camera. The Pi camera came with a Python camera package. Coding it so that pictures were taken or videos were recorded every time certain buttons on the Xbox controller were pressed was fairly easy.

#button pressed and unpressed
elif event.type == JOYBUTTONUP or event.type == JOYBUTTONDOWN:
    #is this button on our xbox controller
    if event.button in self.BUTTONCONTROLMAP:
        #update control value
        self.updateControlValue(self.BUTTONCONTROLMAP[event.button],
                                self._sortOutButtonValue(event.type))
    if event.button == 0 and event.type == 10: #10 = JOYBUTTONDOWN
        camera.capture('image' + str(imgNum) + '.jpg')
        imgNum = imgNum + 1
    if event.button == 1 and event.type == 10: #toggle video recording
        if isRec == False:
            camera.start_recording('video' + str(recNum) + '.h264')
            recNum = recNum + 1
            isRec = True
        else:
            camera.stop_recording()
            isRec = False
    if event.button == 2 and event.type == 10: #toggle camera preview
        if isPrev == False:
            camera.start_preview()
            isPrev = True
        else:
            camera.stop_preview()
            isPrev = False

For this portion of the code, I did have to make variables to serve as counters every time a picture or video was taken, so that they would be numbered. I also had to make Boolean variables that determined whether a video was being taken, to prevent the robot from trying to take another video while one was already recording. After coding the camera, I was finished with programming the robot.

Testing Prototype 2a

The first thing I recorded was the mass of the robot. Using a standard kitchen scale, I recorded the mass of the robot to be 0.66 kg. While not especially light, prototype 2a was significantly lighter than prototype 1, which had a mass of 0.92 kg without cameras. Prototype 2a also measured 15 cm long x 18 cm wide x 12 cm tall. Prototype 2a could meet the size constraint, which was another improvement over prototype 1. Prototype 2a could stick to ferrous surfaces. While the motors of prototype 1 could not overcome the magnetic pull force and move the robot, prototype 2a could move the robot downward or sideways, but not upward, when attached to a vertical steel wall. The 3rd motor on the robot, intended for lifting it off of surfaces, was also unable to function because of a lack of torque. Prototype 2a mounted only one camera, and thus failed the multiple-camera requirement. However, prototype 2a was an improvement over prototype 1. Prototype 2a cost only about $120 to build, compared to prototype 1, which cost more than $400 even without cameras.

4.5   Engineering Mechanics Calculations

I calculated force and torque using equations from the literature as shown below.

Force and Torque Equations

Figure 5 shows a sketch of the robot climbing an inclined plane and the forces present.

For a robot at rest, along the plane: μ(N1 + N2) = Mg sin θ (1)
Perpendicular to the plane: N1 + N2 = F(M1) + F(M2) + Mg cos θ (2)
For a vertical wall, θ = π/2: N1 + N2 = F(M1) + F(M2); μ(N1 + N2) ≥ Mg (3)
The required magnetic force is F(M1) + F(M2) ≥ Mg/μ (4)

With two motors, the torque needed from each is τ ≥ MgR/2 (5)

Force Calculation for Magnet Placement

The paper by Wang and Kimura (IEEE 2014) shows that the friction coefficient for a tape-covered wheel on metal is μ = 0.45. The mass of my robot prototype 2a is M = 0.655 kg. The acceleration of gravity is g = 9.81 m/s². From equation (4), the required magnetic force is 14.5 N. The pull force of the N52 magnet away from a steel surface has been tested and reported by the manufacturer, K&J Magnetics. It is shown for different distances in Figure 6. The thickness of the duct tape I used is 0.01". At a distance of 0.01", the pull force is 1.26 lb per magnet according to the data plotted in Figure 6. In SI units, this pull force per magnet is 5.6 N. To get a magnetic force of at least the 14.5 N calculated from equation (4), we need at least 3 magnets in contact at all times (one per wheel). The μ value of 0.45 is only an estimate. If it is lower (say 0.25), the required magnetic force is higher, about 26.1 N.

Thus, for safety, we need 2 rows of magnets per wheel.
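As a quick check of the arithmetic, equation (4) can be evaluated directly. This is a short sketch; the mass is taken as 0.665 kg, consistent with the measured value of about 0.66 kg, which reproduces the 14.5 N figure.

```python
import math

# Inputs from the analysis above
M = 0.665    # robot mass, kg (prototype 2a, ~0.66 kg measured)
g = 9.81     # gravitational acceleration, m/s^2
mu = 0.45    # friction coefficient, tape-covered wheel on steel

# Equation (4): the total magnetic force must satisfy F >= M*g/mu
required_force = M * g / mu
pull_per_magnet = 5.6  # N per D51-N52 magnet at 0.01" standoff (Figure 6)

# Minimum number of magnets that must touch the surface at any moment
magnets_in_contact = math.ceil(required_force / pull_per_magnet)
print(round(required_force, 1), magnets_in_contact)  # 14.5 3
```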

Torque Calculation for Motor Selection

Torque is important, because it is the rotational force (force multiplied by radial distance) that the motor must generate to move the robot. From equation (5), we know that the torque must be greater than MgR/2 for each of the front wheel motors. For prototype 2a, this works out to more than 0.08 newton meters per motor. The plastic-encased motors I used in prototype 2a (Figure 4) were rated by the manufacturer at 0.1 newton meters each. In my tests, prototype 2a could stay attached to a vertical surface and climb down. However, it struggled to climb up the vertical wall; the torque was barely enough to fight gravity. The results of this test of prototype 2a show that the force and torque calculations were correct. The lesson I learned from building and testing prototype 2a is that the robot should be lighter, or a motor with greater torque should be used. The use of CAD and mechanics calculations made the design and development process systematic and logical. Figure 7 shows the underside of prototype 2a. The three motors and the popsicle sticks can be clearly seen.
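The torque figure can be reproduced the same way. The wheel radius is not stated in the text, so 25 mm is assumed here for the small plastic wheels.

```python
# Sketch of the torque check from equation (5), tau >= M*g*R/2 per drive motor.
M = 0.665    # robot mass, kg
g = 9.81     # gravitational acceleration, m/s^2
R = 0.025    # wheel radius, m (assumed 25 mm; not given in the text)

torque_per_motor = M * g * R / 2
print(round(torque_per_motor, 3))  # ~0.082 N*m, just under the 0.1 N*m rating
```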

4.6     Prototype 2b: Pre-Made Chassis

After developing and testing Prototype 2a, I realized that there were multiple changes I could make to it to make it fit the constraints better, without constructing an entirely new bot. So instead of starting from scratch, I decided to make a series of modifications and upgrades to Prototype 2a, resulting in the creation of Prototype 2b.

Building Prototype 2b

The first change I made to prototype 2a was that I removed all the motors. The motors did not work as expected for climbing up a vertical wall because of a lack of torque; so, all of them had to be replaced or removed. I replaced the drive motors with two new larger motors, and I simply removed the third motor without replacement. The new motors were Uxcell 12 V high-torque gearbox motors. They were chosen because their torque rating was much higher than that of the motors they replaced, but these new motors were heavier. I fastened both motors to the underside of the robot, where the previous motors had been, using strips of double-sided tape for preliminary testing. The new motors each had a mass almost 100 g greater than that of the old motors, so adding both added almost 200 g to the mass of the robot.

I removed the driver board that controlled the third motor, because there was no longer a third motor on the robot, so there was only a need for a single driver board. Next, I removed all of the battery packs on the robot. Battery packs add unnecessary mass to a robot, and only limit its inspection life. Additionally, using batteries increases chances of motor failure when the robot is in deployment, because batteries might run out of battery power in the middle of a run, resulting in the need for an emergency retrieval. I then moved the remaining driver board onto the metal hanger above the Raspberry Pi, where the 4-AA battery pack had been previously. This allowed me to remove the metal hanger at the front of the robot because it was not being used. I also removed two posts with magnetic disks at the rear of the robot that were included in Prototype 2a to increase the stability of the rear. I found out through testing that the posts were not needed.

At this stage, I encountered a major problem. My wheels were no longer compatible with my motors because the new motors had a different shaft compared to the old motors. I tried drilling and cutting the wheel wells to make the wheels fit the motors, but neither solution worked. After some research on what items fit a D shaped motor shaft, I found out that oven knobs often fit D shafts. After buying some oven knobs, I tested them to see if they attach to the motors. After finding out the oven knobs were compatible with the new motors, I sawed the top off the oven knobs, resulting in flat disks that fit onto the new motors. I then drilled out the wheel well on the wheels, after which I superglued the disks to the wheels. By supergluing the disks to the wheels, I made it so that they would be able to attach to the motor. After attaching the wheels and motors, I set up the cameras. I hot glued the Pi NoIR camera to the back of the robot and made it face backwards for my rear-view camera. I then took a wide-angle, fish-eye camera, and hot glued it to the front of my robot facing forwards for my main camera. I then double sided taped and hot glued an endoscopic inspection camera to the front rim of the chassis facing downwards. The use of oven knobs to connect wheels to the new motor shaft is an innovative solution developed in this project.

Wiring Prototype 2b

After modifying prototype 2a, there were many components to re-wire. I had to re-solder a wire to the power leads of the motors and connect it to the remaining driver board. I then removed all of the wires connected to GPIO 4, 9, 16, or 18, as they were no longer in use. I also decided to use a 12 V power cable to power the driver board instead of a battery pack. To do so, I cut off the output end of the power cable, so that all that remained was the adapter and a length of wire. I then separated the two strands of power wire, one positive and the other negative, and stripped them so that both wires were exposed at the end. After twisting and tightening the exposed wire, I connected the positive wire to the ground slot on the driver board, and the negative wire to the voltage slot on the driver board. I left the NoIR camera connected to the Pi, but I connected both of the other cameras to my laptop, so that the laptop received their feeds directly instead of through the Pi. To finish, I swapped the Xbox controller for a Super Nintendo Entertainment System (SNES) controller. An SNES controller is a much lighter and simpler controller than an Xbox controller, and unlike the Xbox controller, which requires a powered hub, an SNES controller can be powered by the Raspberry Pi. The two controllers are shown side by side for comparison in Figure 8.

Programming Prototype 2b

Since the Raspberry Pi had already been completely set up with the previous prototype, I was able to dive straight into programming. While no new code was needed to test the motors, since the previous motor test program worked, a new controller code became necessary because I changed the controller and was no longer using an Xbox controller. Because of the simpler nature of the SNES controller, there was no driver similar to xboxdrv for the SNES controller.

The Pi is capable of interpreting the input from the SNES controller by default. After doing some research and looking into how to interact with an SNES controller through Python, I wrote the following controller program from scratch:

import pygame
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)

GPIO.setup(12,GPIO.OUT) #Left Backward
GPIO.setup(11,GPIO.OUT) #Left Forward
GPIO.setup(13,GPIO.OUT) #Right Forward
GPIO.setup(15,GPIO.OUT) #Right Backward

hadEvent = False
x = False
y = False
a = False
b = False
up = False
down = False
left = False
right = False

pygame.init()
pygame.joystick.init()
j = pygame.joystick.Joystick(0)
j.init()

def game_controller(events):
    global hadEvent
    global x
    global y
    global a
    global b
    global up
    global down
    global left
    global right
    for event in events:
        if event.type == pygame.JOYBUTTONDOWN:
            hadEvent = True
            x = j.get_button(0)
            y = j.get_button(3)
            a = j.get_button(1)
            b = j.get_button(2)
            if x == 1:
                x = True
                print("x")
            elif y == 1:
                y = True
                print("y")
            elif a == 1:
                a = True
                print("a")
            elif b == 1:
                b = True
                print("b")
            elif up == 1:
                up = True
                print("up")
        elif event.type == pygame.JOYBUTTONUP:
            hadEvent = False
            x = j.get_button(0)
            y = j.get_button(3)
            a = j.get_button(1)
            b = j.get_button(2)
            if x == 1:
                x = False
            elif y == 1:
                y = False
            elif a == 1:
                a = False
            elif b == 1:
                b = False
            elif up == 1:
                up = False
        elif event.type == pygame.JOYAXISMOTION:
            hadEvent = True
            if event.axis == 1:
                if event.value <= -1:
                    up = True
                    print("up")
                elif event.value >= 1:
                    down = True
                    print("down")
                else:
                    down = False
                    up = False
            elif event.axis == 0:
                if event.value <= -1:
                    left = True
                    print("left")
                elif event.value >= 1:
                    right = True
                    print("right")
                else:
                    right = False
                    left = False

while True:
    game_controller(pygame.event.get())
    if up == True: #Forward
        GPIO.output(11,GPIO.HIGH) #Left Forward
        GPIO.output(13,GPIO.HIGH) #Right Forward
    elif down == True: #Backward
        GPIO.output(12,GPIO.HIGH) #Left Backward
        GPIO.output(15,GPIO.HIGH) #Right Backward
    elif right == True: #Right
        GPIO.output(11,GPIO.HIGH) #Left Forward
        GPIO.output(15,GPIO.HIGH) #Right Backward
    elif left == True: #Left
        GPIO.output(12,GPIO.HIGH) #Left Backward
        GPIO.output(13,GPIO.HIGH) #Right Forward
    else:
        GPIO.output(12,GPIO.LOW)
        GPIO.output(11,GPIO.LOW)
        GPIO.output(13,GPIO.LOW)
        GPIO.output(15,GPIO.LOW)

This code operates by importing Pygame, a Python package used for building video games in Python. It adds several features, such as interpreting and translating input values from a game controller. Because of the simplicity of the SNES controller, no extra steps were needed. Toward the beginning of the program, I defined the GPIO pins to be used for motor control. I then listed the variables I planned to use, initialized Pygame's joystick module, and assigned the connected controller to j. I then created an event system in which a value sent by the controller, such as a button press or joystick movement, is defined as an event. I specified the events I care about, such as movement of the directional pad (d-pad) or a button being pressed. A variable is assigned a value of 1 when the event it is connected to occurs, and additional code converts the numeric value 1 to the Boolean True. At the end, an infinite loop fetches the values of triggered events. If any of the d-pad values are triggered, the program sends signals to the motors through the GPIO pins. After running this code, the robot responded smoothly to the SNES controller. I did not need any other code to control this prototype.

Testing Prototype 2b

Once again, I started by recording the mass of the robot. Using a standard kitchen scale, I recorded the mass of the robot to be 0.71 kg. Prototype 2b ended up heavier than prototype 2a, despite the removal of the battery packs, but this can be attributed to the heavier motors in prototype 2b. Prototype 2b measured 15 cm long x 18 cm wide x 12 cm tall. Prototypes 2a and 2b are the same size; despite the changes between the two, the overall structure of the robot did not change. Prototype 2b was once again able to meet the size constraint. Prototype 2b could attach to ferrous surfaces and was the first prototype that could climb up vertical ferrous surfaces. Figure 9 shows prototype 2b climbing a vertical steel door. Prototype 2b mounted 3 cameras, and all of them sent back acceptable feeds, which was a large improvement over prototype 2a. Prototype 2b cost $170 to build, compared to the $120 of prototype 2a. The increase can be attributed to the cost of the cameras and the better motors.

4.7     Prototype 3: Custom Polycarbonate Chassis

After building the last two prototypes, I wanted to apply the knowledge I had gained to create a new prototype that was smaller, more compact, and more efficient. To do this, I planned to design my own chassis, and refrain from using tapes and superglue to hold parts together.

Building Prototype 3

To start building my robot, I took a polycarbonate sheet and cut my chassis out of it. For my chassis, I chose a simple 6 cm wide x 11 cm long rectangle. I chose that size and shape because it was simple and, based on preliminary measurements I took, it was the smallest feasible size for mounting the parts I had chosen. After cutting out the chassis with a saw, I smoothed out the edges and corners with a file and sandpaper. I then set the Raspberry Pi on the rear end of the chassis and marked where all of the holes were, so that I would be able to drill them out. I then set the rear wheel on the underside of the chassis and marked holes for it. I also marked holes for the motors I chose at the front of the chassis. The motors I chose were Pololu 12 V gearbox motors with a gear ratio of 298:1. The motors also came with mounting brackets that attached to the motors and had holes for screws. I finally marked a large hole between the Pi and the motors for the inspection camera.

After drilling all of the holes, I screwed down all of the parts except for the Pi. Before I screwed down the Pi, I laid down a thin sheet (4 mm thick) of packing foam underneath where the Pi would be, to absorb shock and prevent contact between the metal on the Pi and the bolts and nuts on the robot. I also attached a folded metal hanger tape with the same bolts as the Pi. The hanger tape formed a bridge over the Pi. I cut a smaller 4.5 cm wide x 5.5 cm long piece of polycarbonate to screw to the top of the metal hanger, and screwed a driver board to the top of it. For the wide-angle camera, I folded and cut thin scrap metal to form a pouch for the camera with a hole for the lens. The pouch had sides that folded in and held the camera, and a flat bottom that extended out to either side. I screwed the metal pouch down with two of the screws that also held the motors. I slid the inspection camera down into the hole that had been drilled for it. The Pi NoIR camera was held by a retaining block that was hot glued to the top of the Ethernet port on the Pi. For the wheels, I used 60 mm diameter x 8 mm thick Pololu plastic wheels. To magnetize each wheel, I covered it in a thin layer of double-sided tape and put the magnets in a ring around it. I then covered the magnets with a single layer of duct tape for protection and traction. After finishing the wheels, I attached a 3 V LED light on either side of the wide-angle camera holder. I also used double-sided tape to attach an ultrasonic sensor to the bottom of the robot.

The robot utilizes an HC-SR04 ultrasonic distance sensor. The HC-SR04 is a very common and popular hobby ultrasonic distance sensor. The sensor is also the least expensive and easiest to use of its type to demonstrate sensor integration. The HC-SR04 is designed mainly with compatibility and simplicity in mind, allowing it to be easily connected to a Raspberry Pi or Arduino.

The HC-SR04 functions by emitting a sound pulse, which bounces off the object at which the sensor points, and then receiving the reflected pulse. The time between transmission and reception is recorded and output. Multiplying this time by the speed of sound and dividing by 2 gives the distance between the sensor and the surface it is pointed towards. The HC-SR04 has 4 pins: ground, voltage, trigger, and echo. The ground pin is connected to ground, and the voltage pin is connected to a +5 V source. The trigger pin causes the sensor to produce a sound pulse for as long as it receives +3 V. The echo pin sends back a +5 V pulse whose duration equals the time the sensor waited to receive the reflected signal. The sensor has a range of 2 cm to 400 cm. On my robot, the HC-SR04 serves to demonstrate that an ultrasonic sensor can be mounted underneath the robot. A more expensive, advanced ultrasonic sensor could be mounted in its place to measure the thickness of the metal surface and identify degradation.
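The timing-to-distance conversion described above can be sketched as a small helper function. This is a minimal illustration only; the function name and the room-temperature speed-of-sound constant are my assumptions, not part of the robot's code.

```python
# Convert an HC-SR04 echo pulse duration into a one-way distance.
# The echo time covers the round trip (sensor -> surface -> sensor),
# so the one-way distance is (time * speed of sound) / 2.

SPEED_OF_SOUND_CM_PER_S = 34300.0  # approximate speed of sound in air at 20 C


def echo_time_to_distance_cm(echo_seconds):
    """Return the sensor-to-surface distance in cm for a given echo time."""
    return echo_seconds * SPEED_OF_SOUND_CM_PER_S / 2.0


# Example: a 1 ms echo corresponds to about 17 cm.
print(echo_time_to_distance_cm(0.001))
```

Libraries such as gpiozero perform this same conversion internally, which is why the program later in this section can read a distance directly from a `DistanceSensor` object.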

Wiring Prototype 3

For the wiring of prototype 3, many elements stayed the same from prototype 2b, but one changed. Because the Pololu wheels blocked the micro USB port on the Pi, I was unable to use it for power. After some research, I found that I could use the GPIO header pins instead. I cut a USB to micro USB cable so that one portion was the USB end and a length of cable. Within the cable were two separate wires. I split and stripped those wires, soldered the exposed ends to the female ends of breadboard jumpers, and covered my work with heat shrink tubing. I used a multimeter to tell which wire was positive and which was negative. I connected the positive wire to pin 9 and the negative wire to pin 14; those two pins served as 5 V and ground respectively. (The pin numbers in this section refer to physical header positions, whereas the program uses BCM GPIO numbering.) After connecting the USB end of the charging cable to a 5 V adapter, the Pi ran perfectly. Once again, wires were soldered to the leads of my motors and connected back to my driver board. The driver board was connected to pins 11, 12, 13, & 15 for control and pins 2 & 6 for 5 V and ground. The driver board was also connected to a 12 V power supply. The LED lights were wired and soldered in parallel. They were attached to a 330 Ω resistor, pins 16 & 18 for power, and pin 9 for ground. The ultrasonic sensor, which was added to this prototype, was wired to pins 4, 29, 30, and 31: pin 4 for voltage, 29 for the trigger, 31 for the echo, and 30 for ground. The NoIR camera was once again connected to the Pi, while the other cameras were connected to my laptop. The robot is still controlled by a USB SNES controller. The wiring diagram is shown in Figure 10.
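Because the wiring description lists physical header positions while the program uses BCM numbering, the correspondence for the signal pins can be collected in one place. This is a reference sketch based on the standard 40-pin Raspberry Pi header; the dictionary name is illustrative, and the per-pin roles are inferred from the program's comments.

```python
# Physical header position -> BCM GPIO number for prototype 3's signal pins.
# This mapping is fixed by the Raspberry Pi 40-pin header layout.
PHYSICAL_TO_BCM = {
    11: 17,  # motor driver: left forward
    12: 18,  # motor driver: left backward
    13: 27,  # motor driver: right forward
    15: 22,  # motor driver: right backward
    16: 23,  # LED Light1
    18: 24,  # LED Light2
    29: 5,   # ultrasonic trigger
    31: 6,   # ultrasonic echo
}

for physical, bcm in sorted(PHYSICAL_TO_BCM.items()):
    print("header pin %2d -> BCM GPIO %d" % (physical, bcm))
```

Checking the table against the program below confirms the two numbering schemes line up: for example, the ultrasonic sensor wired to header pins 29 and 31 appears in the code as BCM GPIOs 5 and 6.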

Programming Prototype 3

To save myself the work of setting up and configuring the Pi, I moved the SD card from prototype 2b to prototype 3. Because the only new code prototype 3 needed was for the ultrasonic sensor, I mainly simplified and commented my SNES code, adding only a few extra lines, as shown below.

#Developed By Nikhil Devanathan 2017
#Program to control Raspberry Pi robot with wired USB SNES controller
#Uses directional pad (d-pad) for motor movement
#Leaves buttons and triggers open for mapping

#Imports necessary packages into python
import pygame #Package that is used for game controller mapping
import RPi.GPIO as GPIO #Allows control over binary pins on Pi
from gpiozero import DistanceSensor

#Sets GPIO pins for motor control
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT) #Left Backward
GPIO.setup(17, GPIO.OUT) #Left Forward
GPIO.setup(27, GPIO.OUT) #Right Forward
GPIO.setup(22, GPIO.OUT) #Right Backward
GPIO.setup(23, GPIO.OUT) #Light1 \
GPIO.setup(24, GPIO.OUT) #Light2 / Work together to power LED lights

#Configures ultrasonic sensor
ultrasonic = DistanceSensor(echo=6, trigger=5, threshold_distance=0.02)

#Creates and assigns variables for controller mapping
hadEvent = False
x = False
y = False
a = False
b = False
up = False
down = False
left = False
right = False

#Initializing pygame and controller
pygame.init()
pygame.joystick.init()
j = pygame.joystick.Joystick(0)
j.init()

#Defining controller event system
def game_controller(events):
    #Declares the mapping variables shared with the main loop
    global hadEvent
    global x
    global y
    global a
    global b
    global up
    global down
    global left
    global right

    #Searches for an event in the system
    for event in events:
        #If a button is pressed
        if event.type == pygame.JOYBUTTONDOWN: #Set map values
            hadEvent = True
            x = j.get_button(0)
            y = j.get_button(3)
            a = j.get_button(1)
            b = j.get_button(2)
        #If a button is released
        elif event.type == pygame.JOYBUTTONUP: #Set map values
            hadEvent = False
            x = j.get_button(0)
            y = j.get_button(3)
            a = j.get_button(1)
            b = j.get_button(2)
        #If there is axial motion on the directional pad
        elif event.type == pygame.JOYAXISMOTION:
            hadEvent = True
            #Set values for y axis
            if event.axis == 1:
                if event.value <= -1:
                    up = True
                elif event.value >= 1:
                    down = True
                else:
                    up = False
                    down = False
            #Set values for x axis
            elif event.axis == 0:
                if event.value <= -1:
                    left = True
                elif event.value >= 1:
                    right = True
                else:
                    left = False
                    right = False

lightOn = False #Value to use with b button light control

#Infinite loop
while True:
    #Get an event from the event system
    game_controller(pygame.event.get())

    #Motor controls based on directional pad values
    if up: #Forward
        GPIO.output(17, GPIO.HIGH) #Left Forward
        GPIO.output(27, GPIO.HIGH) #Right Forward
    elif down: #Backward
        GPIO.output(18, GPIO.HIGH) #Left Backward
        GPIO.output(22, GPIO.HIGH) #Right Backward
    elif right: #Right
        GPIO.output(17, GPIO.HIGH) #Left Forward
        GPIO.output(22, GPIO.HIGH) #Right Backward
    elif left: #Left
        GPIO.output(18, GPIO.HIGH) #Left Backward
        GPIO.output(27, GPIO.HIGH) #Right Forward
    else: #Reset all motor pins
        GPIO.output(18, GPIO.LOW)
        GPIO.output(17, GPIO.LOW)
        GPIO.output(27, GPIO.LOW)
        GPIO.output(22, GPIO.LOW)

    if a: #If a is held, keep the light on
        GPIO.output(23, GPIO.HIGH) #Light1
        GPIO.output(24, GPIO.HIGH) #Light2
    else: #If a is released, turn the light off
        GPIO.output(23, GPIO.LOW) #Light1
        GPIO.output(24, GPIO.LOW) #Light2

    if b: #If b is pressed, toggle a solid light
        if lightOn: #If the light is on
            GPIO.output(23, GPIO.LOW) #Light1
            GPIO.output(24, GPIO.LOW) #Light2
            lightOn = False #Declare that the light is off
        else: #If the light is off
            GPIO.output(23, GPIO.HIGH) #Light1
            GPIO.output(24, GPIO.HIGH) #Light2
            lightOn = True #Declare that the light is on

    if y: #If Y button is pressed
        #Scan distance to ground with ultrasonic sensor
        u = ultrasonic.distance
        print(u)

The only changes made to this program were the addition of comments throughout the program, and the deletion of unnecessary code segments.

Testing Prototype 3

Using a standard kitchen scale, I recorded the mass of the robot as 0.26 kg, significantly less than every other model. Prototype 3 measured 14 cm long x 9 cm wide x 12 cm tall, making it the smallest of the prototypes and more than a factor of two smaller than prototypes 2a & 2b. Prototype 3 could attach to ferrous surfaces and move across them at speeds of 0.18 meters/second, making it the fastest prototype. Prototype 3 mounted 3 cameras, and all of them sent back acceptable feeds. Prototype 3 cost $175 to build, compared to $120 for prototype 2a and $175 for prototype 2b; the cost can be attributed to the cameras and the smaller motors. Sample images from the three cameras are shown in Figure 11, and the results of robot testing are shown in Tables 1 and 2. The final prototype can be seen in Figure 12.

Source: Design and Development of a Low-Cost Inspection Robot

