[ { "Id": "2", "CreationDate": "2016-12-06T17:13:40.417", "Body": "

Can someone explain to me the differences between Diamond, Platinum and Gold membership at the OCF?

\n", "Title": "What are the differences of the Open Connectivity Foundation membership levels?", "Tags": "|ocf|", "Answer": "

Honestly, not much: mainly the price to become a member, more prominent advertising, and more power within the organization.

\n\n

The membership application form is here. All members must follow the bylaws shown here.

\n\n
\n

SECTION 14.1 DIAMOND MEMBERS

\n \n
    \n
  1. The Corporation shall have Diamond Members. Any applicant qualified under Section 12.2 wishing to become a Diamond Member after the Organizational Meeting must be approved via 3/4 vote of all current Directors appointed by Diamond Members with such vote occurring via electronic means. Following an affirmative vote of the Diamond Members, an applicant for Diamond Membership shall be admitted to membership upon execution of a Diamond membership agreement.
  2. Diamond Members who remain in good standing shall be: a. Perpetually eligible to appoint a representative to the Board of Directors of the Corporation in accordance with these Bylaws; b. Eligible to have a representative appointed or elected as an officer of the Corporation; c. Eligible to participate in the Work Groups of the Corporation and have a representative chair the same; and d. Subject to procedures and requirements as may be adopted by the Corporation, eligible to seek certification of the Member\u2019s products and/or services and use the Corporation\u2019s trademarks in connection with the Member\u2019s certified products or services.
  3. Diamond Members may be downgraded to Platinum Members (or at their option Gold Members) upon unanimous vote of all Directors appointed by Diamond Members, less one (1), when such Directors determine, after affording the Diamond Member in question the right to be heard on the issue, that the Diamond Member has failed to actively contribute to the work of the Corporation.
\n \n

SECTION 14.2 PLATINUM MEMBERS

\n \n
    \n
  1. The Corporation shall have Platinum Members. Any applicant qualified under Section 12.2 wishing to become a Platinum Member shall be admitted to membership upon its execution of the appropriate Membership Agreement.
  2. Platinum Members who remain in good standing shall be:

    \n \n
      \n
    • Eligible to have a representative elected to the Board of Directors in accordance with these Bylaws;
    • Eligible to have a representative appointed or elected as an officer of the Corporation;
    • Eligible to participate in the Work Groups of the Corporation and have a representative chair the same; and
    • Subject to procedures and requirements as may be adopted by the Corporation, eligible to seek certification of the Member\u2019s products and/or services and use the Corporation\u2019s trademarks in connection with the Member\u2019s certified products or services.
\n \n

SECTION 14.3 GOLD MEMBERS

\n \n
    \n
  1. The Corporation shall have Gold Members. Any applicant qualified under Section 12.2 shall be admitted to membership upon its execution of the appropriate Membership Agreement.
  2. Gold Members who remain in good standing shall be: a. Eligible to participate in the Work Groups of the Corporation in a non-voting capacity; and b. Subject to procedures and requirements as may be adopted by the Corporation, eligible to seek certification of the Member\u2019s products and/or services and use the Corporation\u2019s trademarks in connection with the Member\u2019s certified products or services.
  3. The Corporation may also have Nonprofit/Educational Gold Members. Any entity who qualifies as a nonprofit or educational entity under the laws and regulations of its domicile jurisdiction may apply for Membership as a Nonprofit/Educational Gold Member. The Board of Directors shall review any and all applications for such level of Membership and may, in their sole discretion, accept such application upon a determination that the applicant meets the requirements for this membership subset. a. Nonprofit/Educational Members who remain in good standing shall be entitled to all Membership benefits of Gold Members; provided, however, that Nonprofit/Educational Membership benefits (including but not limited to rights under the IPR Policy and the right to receive or review Confidential Information) shall not extend beyond employees of the nonprofit or educational entity. For avoidance of doubt, it is expressly understood that membership rights and benefits do not extend to members or participants in nonprofit entities, their employees or contractors, or to governmental entities other than the educational entity applicant.
\n
\n\n

It's a lot to process, but in short: every member must comply with the OCF's device guidelines, and the higher your membership tier, the more power you have to influence what those guidelines are.

\n\n

Other than that, moving from Gold to Platinum mainly seems to bump up how prominently you appear on the membership list.

\n" }, { "Id": "7", "CreationDate": "2016-12-06T17:51:21.647", "Body": "

I've got the Samsung SmartThings system installed in my house, but I've encountered a few situations where a new device (e.g. an in-wall outlet) couldn't be found through the SmartThings App on my Android phone. A temporary fix for me was to move one of my plug-in outlets about halfway to serve as a \"bridge\" between the SmartThings Hub and the new outlet.

\n\n

My question is whether there is an alternative way of adding new devices to my smart home, for instance using my phone. Alternatively, which wireless technology (e.g. ZigBee, Z-Wave, Bluetooth) is needed to complete these links?

\n", "Title": "Adding Devices to Samsung SmartThings", "Tags": "|smart-home|wireless|samsung-smartthings|", "Answer": "

Apparently, SmartThings uses the Z-Wave protocol, among others such as ZigBee (which Philips Hue devices use).

\n\n

Try pairing new devices close to the SmartThings hub so it can recognize them; this also lets you check whether they are compatible.

\n\n

However, be aware of distance: battery-powered Z-Wave devices tend to \"sleep\" to save power and can drop off the network if the signal is marginal.

\n" }, { "Id": "14", "CreationDate": "2016-12-06T18:02:11.623", "Body": "

I have done a bit of home automation, such as building a Raspberry Pi-based remote camera that can be turned on locally via SSH and publishes images to a Linux server.

\n\n

I'm curious, though, as to what protocols are best followed when your security is behind a router. I've used tools like PuTTY and opened ports so that I can tunnel in, but I don't imagine these are the most secure methods.

\n\n

I'm wondering what protocols/tools are best used when accessing a home server system remotely.

\n", "Title": "What are the best security practices to secure a remote IoT camera?", "Tags": "|smart-home|security|raspberry-pi|", "Answer": "

SSH is a reasonable starting point; it's essential that the connection is encrypted, and using PuTTY for SSH access is one way to achieve this. A VPN is another. What is really important is that you use strong passphrases or keys to access the devices within your network, and that you keep the gateway devices up to date.

\n\n

Using a non-standard port is kind of sensible, but does nothing to secure your network if you leave your device with a default (or common) password.

\n\n

If you want remote access, you need a port open to forward SSH (or something very similar). If you don't trust the security implementation on the camera (i.e. its last firmware update was more than about 6 months ago), then you need to use a VPN to create an isolated network segment for it. If it has WiFi and old firmware, it might as well be wide open and public (at least to anyone in physical proximity).

\n" }, { "Id": "17", "CreationDate": "2016-12-06T18:06:08.170", "Body": "

Would it be unsafe or considered \"bad practice\" to replace a gas fireplace switch with an IoT enabled smart switch?

\n\n

I would like to think there would be no issue, but I want to make sure that it's not going to break any fire safety codes in my house. It would be nice, from a home automation perspective, to turn it on as necessary, utilizing other sensors.

\n", "Title": "Safety of a smart switch on a fireplace", "Tags": "|smart-home|wireless|safety|building-code|", "Answer": "

A little late. I installed one such switch on one of my gas fireplaces in 2017 using Fibaro Home Automation.

\n

There was no regulation against this in New Zealand (conditions may vary in other countries), but there are regulations about converting a fireplace over for automation.

\n

I wanted to do the same to the other gas fireplace, which only had a wall switch for the fan, so I enquired about converting it. Although the supplier had a kit to do this, he was not allowed to install it because it was for a different model/brand.

\n" }, { "Id": "19", "CreationDate": "2016-12-06T18:08:54.370", "Body": "

I am currently working on a project that includes Bluetooth communication between a mobile application (currently using the Ionic platform) and an embedded device. For comparison, our product is similar to a smart lock.

\n\n

Security is of utmost concern, and we are looking into ways to ensure our hardware and software are not hackable. What steps should we take to ensure our system is secure?

\n\n

Edit: Yes, we are currently encrypting communications, and using HTTPS when the device communicates with our server.

\n", "Title": "How do I secure communication between app and IoT device?", "Tags": "|security|bluetooth|mobile-applications|", "Answer": "

If you can have end-to-end TCP, then use end-to-end TLS (e.g. with HTTPS).

\n\n

Don't reinvent the wheel, especially when it comes to cryptography \u2014 most people get it wrong. Unless the device is too resource-constrained to support TLS, if you get down to the level of AES, you're doing it wrong. #1 mistake is to encrypt and forget to authenticate \u2014\u00a0if you have an encrypted channel between your server and a man-in-the-middle, instead of an encrypted channel between your server and your device, then encryption hasn't provided any benefit. If you can't use TLS, make sure that whatever protocol you're using authenticates everything, and encrypts what needs to be confidential.

\n\n

To use TLS securely, think about what guarantees you need to have, from the point of view of each participant. Generally the device needs to know that it's talking to the legitimate server. That means that it must check the server's certificate. The device must have the X.509 certificate of a certificate authority recorded as trusted; this requires storage that can't be modified by an attacker, but it doesn't require any confidentiality of the storage. Note that you shouldn't hard-code the server's certificate directly, because that wouldn't let you easily replace that certificate if it's compromised. Instead, the device stores the expected identity (host name) of the server, and the certificate of a certificate authority that guarantees that a certain public key belongs to the expected host name. Once again, don't reinvent the wheel, rely on the certificate checking provided by your TLS library or application.
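The certificate checks described above can be sketched with Python's standard `ssl` module; the host name and CA file path in the comments below are placeholders, not part of any real deployment:

```python
import ssl

# A client-side TLS context with full certificate verification.
# create_default_context() enables CERT_REQUIRED plus hostname checking
# and loads the system's trusted CA store.
context = ssl.create_default_context()

# For a device fleet with its own private CA, pin that CA instead of
# relying on the system store (the path is a placeholder):
# context.load_verify_locations(cafile="my_root_ca.pem")

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Connecting then looks like this (host name is a placeholder):
# with socket.create_connection(("device.example.com", 8883)) as raw:
#     with context.wrap_socket(raw, server_hostname="device.example.com") as tls:
#         tls.sendall(b"hello")
```

The key point is that the device stores only the CA certificate and the expected host name; the library does the actual chain and hostname validation, so you never hand-roll those checks.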

\n\n

If the server needs to know that it's talking to a legitimate client, then each client needs to have its own client certificate. That requires confidential storage on the client. Pass the client certificate to the TLS session opening function from your TLS library, or set it in the application configuration.

\n\n

That takes care of securing the communication between your server and your device. If the mobile application can talk to the device directly (e.g. to allow disconnected operation while it's on the local wifi network), you need to first perform pairing between the device and the mobile phone. Pairing means an exchange of keys, preferably an exchange of public keys if resources permit, otherwise an agreement on secret keys. The goal of this pairing is to ensure that each device knows who it's talking to.
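As a toy illustration of what pairing exchanges, here is a Diffie-Hellman key agreement over a deliberately tiny (and totally insecure) group; per the "don't reinvent the wheel" advice above, a real product would use X25519 or another vetted ECDH implementation from a crypto library rather than anything like this:

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement illustrating pairing.
# The 23/5 group is deliberately tiny and INSECURE; it exists only to
# show the shape of the exchange.
P, G = 23, 5

phone_secret = secrets.randbelow(P - 2) + 1   # never leaves the phone
device_secret = secrets.randbelow(P - 2) + 1  # never leaves the device

phone_public = pow(G, phone_secret, P)    # sent to the device
device_public = pow(G, device_secret, P)  # sent to the phone

# Both sides derive the same shared secret without ever transmitting it.
phone_shared = pow(device_public, phone_secret, P)
device_shared = pow(phone_public, device_secret, P)
assert phone_shared == device_shared

# Derive a symmetric session key from the shared secret.
session_key = hashlib.sha256(str(phone_shared).encode()).digest()
```

Note that plain Diffie-Hellman authenticates nobody; real pairing protocols add a step (a PIN, a button press, a displayed code) so each side knows which peer it just agreed a key with.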

\n\n

You'll need to secure the control channel as well, whether it goes directly from the mobile device to the IoT device or via a server. Think about authorization: are there different levels of access, e.g. a control level that allows reconfiguration and a basic channel that just allows on/off switching? This is generally handled by an authentication step after establishing the secure channel (TLS if possible).

\n\n

Another consideration is firmware updates. That's a tricky one: on the one hand, there's no such thing as absolute security, so you will have security patches to apply now and then. On the other hand, a firmware upgrade mechanism is a complex thing and might itself have bugs. At the very least, make sure that your firmware upgrades are signed. Relying purely on the security of the communication channel for upgrades is dodgy, because the trusted base for a secure channel is larger than for a static signature verification, and sometimes you might want to apply firmware updates without a network connection. In addition to verifying the signature, ideally you should have some protection against rollback, so that an adversary can't just push an \u201cupdate\u201d to version 1 on a system running version 2 and then exploit the security hole that version 2 patched.
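A minimal sketch of signed-update checking plus rollback protection follows. An HMAC stands in for a real asymmetric signature so the example stays standard-library only; on an actual device you would embed a public key and verify e.g. an Ed25519 detached signature instead, and the key and image bytes here are placeholders:

```python
import hashlib
import hmac

# Sketch of signed-update verification with rollback protection.
# HMAC is a stand-in for a real asymmetric signature scheme.
SIGNING_KEY = b"factory-provisioned-key"  # placeholder

def sign_update(image: bytes, version: int) -> bytes:
    # Sign the version together with the image so the version
    # number itself cannot be tampered with.
    return hmac.new(SIGNING_KEY, version.to_bytes(4, "big") + image,
                    hashlib.sha256).digest()

def apply_update(image: bytes, version: int, tag: bytes,
                 installed_version: int) -> bool:
    expected = sign_update(image, version)
    if not hmac.compare_digest(tag, expected):
        return False   # tampered or unsigned image
    if version <= installed_version:
        return False   # rollback attempt: refuse older firmware
    return True

image = b"\x00firmware-v2\x00"
tag = sign_update(image, 2)
assert apply_update(image, 2, tag, installed_version=1)
assert not apply_update(image, 1, sign_update(image, 1), installed_version=2)
assert not apply_update(image, 2, b"\x00" * 32, installed_version=1)
```

Binding the version into the signed data is what makes the rollback check meaningful: an attacker cannot re-label an old, validly signed image as a newer version.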

\n" }, { "Id": "25", "CreationDate": "2016-12-06T18:17:51.117", "Body": "

I'm planning to use a simple light switch that is placed on the wall. The switch is powered by a battery or a piezo element, and on each on/off event it sends a unique data sequence over 433 MHz to a receiver connected to my SmartHome Raspberry Pi.

\n\n

Since I'm living on the ground floor, I have some security concerns: someone could record and replay the unique sequences that the switch sends.

\n\n

Is it possible to improve the security using hardware or software?

\n", "Title": "Securing 433 MHz wall light switch", "Tags": "|security|smart-home|raspberry-pi|", "Answer": "

Use a rolling code, similar to what garage door openers use now. Here is an open source example.
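For illustration, the rolling-code idea can be sketched as an HMAC over a shared counter. This is a toy model of the concept, not the KeeLoq-style scheme real garage doors use, and the shared key is a placeholder:

```python
import hashlib
import hmac

# Toy rolling-code scheme: transmitter and receiver share a key and a
# counter; each button press sends HMAC(key, counter), so a recorded
# frame cannot be replayed. The receiver accepts a small window of
# future counters to tolerate presses it didn't hear.
SHARED_KEY = b"paired-at-install-time"  # placeholder
WINDOW = 16

def code_for(counter: int) -> bytes:
    return hmac.new(SHARED_KEY, counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:8]

class Receiver:
    def __init__(self):
        self.counter = 0

    def accept(self, code: bytes) -> bool:
        for c in range(self.counter + 1, self.counter + 1 + WINDOW):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c   # advance: this code can never be reused
                return True
        return False

rx = Receiver()
press1 = code_for(1)
assert rx.accept(press1)       # first press accepted
assert not rx.accept(press1)   # replayed capture is rejected
assert rx.accept(code_for(3))  # a missed press (2) is tolerated
```

The security property is exactly what the question asks for: a captured 433 MHz frame becomes worthless the moment the receiver has seen it, because the counter only moves forward.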

\n" }, { "Id": "30", "CreationDate": "2016-12-06T18:22:52.167", "Body": "

When reading about IoT, I often stumble upon the phrase \"Industry 4.0\". But what is the exact definition of it? Is \"Industry 4.0\" just a phrase for an IoT application in an industrial environment?

\n", "Title": "What is the difference between IoT and \"Industry 4.0\"?", "Tags": "|definitions|industry-4.0|", "Answer": "

Another name for Industry 4.0 is the Industrial Internet of Things (IIoT), essentially IoT applied to industry. IIoT, or Industry 4.0, is mainly used in the manufacturing sector, where its use cases are driving significant operational and financial benefits.

\n" }, { "Id": "35", "CreationDate": "2016-12-06T18:29:04.690", "Body": "

What is the best approach to partitioning IoT devices from non-IoT devices at home?

\n\n

I have heard that setting up separate networks, one for IoT devices and one for everything else, is a good approach. This can be thought of as a three-router \"Y\" network setup: one router connects the home to the outside world and connects to two other routers, one of which serves the IoT devices and the other everything else.

\n\n\n", "Title": "Secure Home Network Partition for IoT Devices", "Tags": "|security|networking|", "Answer": "
\n

What is the best approach to partitioning IoT device in the home?

\n

I have heard of setting up separate networks, one for IoT devices and one for everything else.

\n

Is this approach secure enough?

\n
\n

Well, technically, there is no such thing as absolute security. Separating the networks should be safe enough, assuming your router doesn't have any vulnerabilities and can isolate the segments properly. Enable "Client Isolation" (or whatever your router vendor calls it) if your router supports it; it may be under the NAS, Firewall or Wireless menus:

\n

\"Client

\n
\n

Are IoT devices a major security risk to have on the same network as my PC?

\n
\n

Kinda. While I haven't read of such an attack yet, IoT devices could technically be used to sniff your Internet communications, or to exploit vulnerabilities in your operating system or installed software in order to take over your PC. This is a bigger risk if your PC has an SSH or Telnet port open, and especially if it uses an insecure password. This was actually abused by the Mirai botnet: it attacked devices' Telnet ports and tried common passwords in order to take each device over and add it to the botnet. But an attack like this requires a vulnerability in both the IoT device and your computer, so it's unlikely to happen.

\n" }, { "Id": "41", "CreationDate": "2016-12-06T18:32:38.917", "Body": "

I have a Raspberry Pi running Jarvis, a personal AI assistant that I can use with my voice to control my smart home devices.

\n\n

However, the voice recognition is far from perfect. They have a list of speech-to-text (STT) services that I could use.

\n\n

Should I be worried about privacy if I choose a better service?\nWhat can I do to improve the current service I use? Is buying a better microphone a good idea?

\n", "Title": "How to Enhance the Voice Recognition on Raspberry Pi controlling Smart Home Devices", "Tags": "|smart-home|raspberry-pi|voice-recognition|", "Answer": "

You could try running .NET Core on the Raspberry Pi. Microsoft has published decent automation libraries, including speech recognition. This is of course more hands-on, but combined with other sensors and software, you could do some pretty cool stuff.

\n\n

Microsoft also exposes its speech recognition libraries through the Bing APIs, so you could potentially use the web service to drive your project. This then relies on standard internet security measures such as authorization.

\n" }, { "Id": "44", "CreationDate": "2016-12-06T18:34:04.143", "Body": "

I am developing a device which measures temperature, humidity and mass. Currently it uses HTTPS to upload data to a remote server. Now, I know that there is a protocol called MQTT which is claimed to be the \"protocol of the Internet of Things\".

\n\n

In what case and why should I switch from HTTPS to MQTT?

\n", "Title": "When and why to use MQTT protocol?", "Tags": "|mqtt|protocols|data-transfer|https|", "Answer": "

Why and when to use MQTT:

\n
    \n
  1. When you have a server capable of running an MQTT broker (obviously). You can install one on a Raspberry Pi, or compile the Erlang MQTT broker on any other SBC running ARM Linux.
  2. When you are going to make use of topic wildcards, as explained in Goufalite's answer.
  3. When you are going to make use of the retained (persistent) message feature. For example, publish a retained message on /device/board_1/should_be_active on day 1 and nothing ever again on that topic. When a new device subscribes to that topic on day 2 or any later day, it will still receive the data that was published on day 1. Even if the broker restarts, the retained data on that topic is not lost and will be delivered to new subscribers.
\n
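The wildcard and retained-message behaviour described above can be modelled in a few lines. This toy in-memory "broker" only illustrates the semantics; a real system would use an actual broker such as Mosquitto with a client library such as paho-mqtt:

```python
# Toy in-memory model of two MQTT features: topic wildcards
# ('+' matches one level, '#' matches the remainder) and retained
# messages. Illustration only, not a real broker.

def topic_matches(pattern: str, topic: str) -> bool:
    p, t = pattern.split("/"), topic.split("/")
    for i, part in enumerate(p):
        if part == "#":
            return True                     # '#' matches the whole remainder
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(p) == len(t)

class ToyBroker:
    def __init__(self):
        self.retained = {}                  # topic -> last retained payload

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload  # kept until overwritten

    def subscribe(self, pattern):
        # A new subscriber immediately receives matching retained
        # messages, even ones published long before it connected.
        return {t: m for t, m in self.retained.items()
                if topic_matches(pattern, t)}

broker = ToyBroker()
broker.publish("device/board_1/should_be_active", b"1", retain=True)  # "day 1"
late = broker.subscribe("device/+/should_be_active")                  # "day 2"
assert late == {"device/board_1/should_be_active": b"1"}
```

With plain HTTPS polling you would have to build both of these behaviours (fan-out by pattern, last-known-value delivery) yourself on the server.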

When you don't want to use MQTT

\n
    \n
  1. You don't really need the features above.
  2. You don't need encryption. For example, in a car not all data is encrypted, especially the data sent over CAN.
  3. You don't want to spend resources on running an MQTT broker; if you can get by with microcontrollers only, why should you?

    You can use another protocol like CoAP (explained by Pepe Bellin). The NodeMCU firmware supports a CoAP server, which means you don't need an expensive Linux-powered server.

    You can also use raw UDP or TCP/IP sockets and encrypt the data yourself. This way you don't need to use HTTPS.

\n

Years ago I always used MQTT when working on communication between devices, but nowadays, in order to be more efficient with resources, I prefer raw sockets, even when communicating with Android devices.

\n" }, { "Id": "66", "CreationDate": "2016-12-06T18:50:25.190", "Body": "

Basically, when it comes to IoT, the two main communication methods that come to my mind are Bluetooth and Wi-Fi. I know that there are others as well, like ZigBee and Z-Wave, but I would like to stick with either Wi-Fi or Bluetooth, as they are supported by smartphones and tablets by default.

\n\n

Application overview, system specifications:

\n\n\n\n

In general, I have a lot of devices with low bandwidth requirements, and the main goal is to have a system that is as scalable as possible. So if I move to a house twice as big, which needs almost twice as many sensors, I want installing the additional ones to be as easy as possible.

\n\n
\n\n

Now, I know the basic advantages and disadvantages of the two. They are compared at this site: Bluetooth vs. Wi-Fi (the image is taken from there as well).

\n\n

\"Comparison

\n\n

To highlight: Bluetooth is cheap and easier to use, while Wi-Fi is more secure and has higher range and bandwidth but, of course, costs more.

\n\n
\n\n

So the question is: at the dawn of a project, how can one decide which of the two will be more suitable for the task? I consider scalability the most important requirement.

\n\n
\n", "Title": "When to use Wi-Fi over Bluetooth or vice versa in an IoT system?", "Tags": "|smart-home|bluetooth|wifi|", "Answer": "

Another factor to consider is radio packet size, which is much smaller for Bluetooth than for WiFi. This means that the collision risk is lower for Bluetooth than for WiFi, and a WiFi transmission (longer on-air time, higher power) is more likely to disrupt a Bluetooth transmission than vice versa.

\n" }, { "Id": "70", "CreationDate": "2016-12-06T18:53:54.020", "Body": "

A few years ago I helped crowdfund a Petzi Treat Camera to feed my pet. Due to a broadcast-pushed firmware update that I missed, it was rendered unusable. Thankfully, they did send me a replacement unit, but I am now left with a bricked one. The device by design communicates via Wi-Fi, has a camera and has a motorized trap door to dispense treats. I haven't opened it yet, but I was thinking I might try gaining access to it or seeing if its components can be used elsewhere.

\n\n

I'm wondering whether there's a potential to reclaim/salvage parts of my old one and up/downcycle it or, if that's not possible, what would be the most environmental considerate way to dispose of this device?

\n", "Title": "Reusing / recycling a Petzi Treat Camera?", "Tags": "|hardware|sustainability|upcycling|disposal|", "Answer": "

Electronics are generally recyclable; that is not something specific to IoT devices. More to the point, your IoT device will typically contain a general-purpose MCU.

\n\n

Re-purposing the MCU is generally feasible. Even if the bootloader is locked down, you may be able to wipe the entire device using JTAG. It is debatable how much value there is in re-using a several-year-old device, but the hardware should remain viable for 5-10 years.

\n\n

Performance, security protocols, wireless protocols and so on are the factors that will lag behind. You would also be working with a much more physically constrained platform than a small dev board would provide.

\n\n

Generally, it will not be feasible to re-purpose the original firmware, even if it was open-source derived: there may be missing configuration details or drivers. You should plan on finding or writing the whole stack (though this may be a valuable learning exercise). There may also be server-side code which would not be re-usable.

\n" }, { "Id": "78", "CreationDate": "2016-12-06T19:07:12.187", "Body": "

Is it legal to send my GSM IoT device, which is powered by a ~2000 mAh (3.7 V) lithium polymer (LiPo) battery, via air post? (Or even take it with me inside my suitcase?)

\n\n

Also, does it make a difference whether the device is powered on or off?

\n", "Title": "Can I send my GSM IoT device via airpost?", "Tags": "|batteries|gsm|", "Answer": "

Since it is a GSM-capable device, you must switch it to airplane mode, which means the device's RF interface must be completely shut down.

\n\n

Basically, what you have described could be any smartphone, and I believe those can be shipped by air post as long as they are in airplane mode, and certainly if they are powered off.

\n" }, { "Id": "90", "CreationDate": "2016-12-06T19:21:46.890", "Body": "

In order to mitigate or manage the risk from having some of the devices on my home network compromised, is it feasible to monitor network traffic so as to detect a compromise?

\n\n

I'm specifically interested in solutions which don't require me to be a networking expert, or to invest in anything more than a cheap single-board computer. Is this a feature that can practically be integrated into a router firewall, or is the problem too open-ended to have a simple, easy-to-configure solution?

\n\n

I'm not asking about Wireshark - I'm asking for a self-contained system which can generate alerts of suspicious activity. I'm also thinking of something practical for a capable amateur to set up, rather than a robust production-quality solution.

\n\n

Addendum:\nI see there is now a Kickstarter project (Akita) which seems to offer cloud-based analytics driven by local WiFi sniffing.

\n", "Title": "Can I monitor my network for rogue IoT device activity?", "Tags": "|security|networking|", "Answer": "

In short, standardization and product developments are underway to address this problem. Until then, there are few simple answers that don't require some networking knowledge.

\n\n

My humble suggestion is easy to implement and will provide your local network with some protection (although it won't protect the Internet at large), without your needing to know anything about networking beyond how to plug in and use a wireless router.

\n\n

Buy a separate wireless router for your home network, and use it just for your IoT devices. This will make it harder for the IoT devices to discover and attack your other devices (such as PCs, Tablets, and Smartphones). Likewise, it will provide your IoTs some protection from compromised computing devices you may have.

\n\n

This setup may break some things, but it is perversely helped by the mostly undesirable reality that many IoT devices today achieve remote communications through a manufacturer-controlled cloud infrastructure. That lets your IoT devices communicate with your computing devices more safely than having them on the same network, though it also allows the manufacturer to collect personal information about you and provide that to third parties.
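To get at least basic alerting of the kind the question asks for, one simple sketch is to compare the devices currently visible on the LAN against an allowlist of known MAC addresses. The neighbour list below is hard-coded sample data; on a real Linux router you might populate it from `ip neigh` output or the DHCP lease table:

```python
# Minimal "alert on unknown devices" sketch: flag any (MAC, IP) pair
# on the LAN that is not in a hand-maintained allowlist. Sample data
# only; the MACs and IPs are made up.

KNOWN_DEVICES = {
    "aa:bb:cc:dd:ee:01": "thermostat",
    "aa:bb:cc:dd:ee:02": "camera",
}

def unknown_devices(neighbours):
    """Return (mac, ip) pairs not present in the allowlist."""
    return [(mac, ip) for mac, ip in neighbours if mac not in KNOWN_DEVICES]

seen = [
    ("aa:bb:cc:dd:ee:01", "192.168.0.10"),
    ("de:ad:be:ef:00:01", "192.168.0.66"),  # not on the allowlist
]
alerts = unknown_devices(seen)
assert alerts == [("de:ad:be:ef:00:01", "192.168.0.66")]
```

This catches new devices joining the network, not a known device behaving badly; for the latter you need traffic-level monitoring, which is exactly the harder problem the answer alludes to.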

\n" }, { "Id": "96", "CreationDate": "2016-12-06T19:32:33.480", "Body": "

The controller in question is an STM32F030K6T6, which has an ARM\u00ae 32-bit Cortex\u00ae-M0 low-power core, 32 kB of Flash memory and 4 kB of SRAM. It interfaces with a SIM808 for Internet connectivity.

\n\n

The memory resources in particular are quite limited.

\n\n\n\n

(I am not asking about a complete protocol stack implementation.)

\n", "Title": "How can I implement MQTT on an STM32F030K6T6?", "Tags": "|mqtt|microcontrollers|stm32|arm|", "Answer": "

The mbed MQTT library doesn't document any memory requirements likely to be limiting, and can reasonably be assumed to target this sort of small-footprint device as an endpoint. You could fairly trivially import the library into a similar device platform using the online compiler and at least check the code footprint.

\n" }, { "Id": "99", "CreationDate": "2016-12-06T19:45:29.950", "Body": "

IoT stands for Internet of Things.

\n\n

Would a Fitbit, or a device that connects to a mobile phone or computer through Bluetooth, be considered IoT? What about radio-controlled devices?

\n\n

What classifies a device as an Internet of Things device?

\n", "Title": "What classifies a device as IoT?", "Tags": "|definitions|", "Answer": "
\n

What classifies a device as an Internet of Things device?

\n
\n\n

As per my understanding, an IoT device:

\n\n
    \n
  1. Serves a specific use case, e.g. Amazon Echo, Nest Cam, Belkin WeMo. This is a key differentiation from general-purpose computing devices such as computers, smartphones or tablets.

  2. Can connect to a network for data transmission/reception, mostly an IP network.

  3. Is designed to be ubiquitous and inconspicuous. These devices are gradually appearing everywhere and serve their purpose without getting in the users' way.

  4. Has some sort of cloud back end so that it can be reached anywhere, anytime by the users authorized to use it.
\n" }, { "Id": "100", "CreationDate": "2016-12-06T19:45:41.323", "Body": "

I did a prototype project for work a few years ago that utilized the Constrained Application Protocol (CoAP) for communicating with an Arduino board over a mesh network, but we put the brakes on the project due to a serious lack of security in our devices. We ended up abandoning CoAP for our project to move to an in-house protocol that we adapted for our needs.

\n

I've done a little digging around, and it looks like there are still a few implementations floating around, but I was curious if anyone is actually using CoAP in any products.

\n

Is CoAP still a good protocol to use, or has the industry settled on a de-facto standard?

\n", "Title": "Is CoAP still used for IoT devices?", "Tags": "|communication|coap|standards|", "Answer": "

Some of the improvements introduced with CoAP might become less relevant with the introduction of HTTP/2 and HTTP/3, so we will need to wait and see how the protocol wars evolve.

\n" }, { "Id": "111", "CreationDate": "2016-12-06T20:22:47.717", "Body": "

I am interested in connecting a 3.5\" hard drive to an IoT device that I am building based on a microcontroller (e.g. an Arduino) which doesn't run a mainstream operating system.

\n\n

As far as I know, to use a 3.5\" hard drive you need a full operating system (such as Linux or Windows) to provide the device drivers. Is this assumption correct?

\n\n

Is there a way of implementing drivers for SATA/USB and the FAT filesystem, so that I can save files to a USB or hard drive? Are there any pre-existing projects or drivers that I can re-use for this purpose?

\n\n

I would prefer not to use an SD card, because one with the capacity I want would be more expensive.

\n", "Title": "Can I implement FAT on a microcontroller to access USB/SATA drives?", "Tags": "|usb|", "Answer": "

There is a project FatFs - Generic FAT File System Module that offers FAT access for microcontrollers.

\n\n
\n

FatFs is a generic FAT/exFAT file system module for small embedded systems. The FatFs module is written in compliance with ANSI C (C89) and completely separated from the disk I/O layer. Therefore it is independent of the platform. It can be incorporated into small microcontrollers with limited resource, such as 8051, PIC, AVR, ARM, Z80, 78K and etc.

\n
\n\n

Petit FatFs is for tiny (8 bit) microcontrollers.

\n\n
\n

Petit FatFs is a sub-set of FatFs module for tiny 8-bit microcontrollers. It is written in compliance with ANSI C and completely separated from the disk I/O layer. It can be incorporated into the tiny microcontrollers with limited memory even if the RAM size is less than sector size. Also full featured FAT file system module is available here.

\n
\n\n

So, yes: IoT devices based on microcontrollers that can run FatFs or Petit FatFs can access hard-disk drives without a full-blown OS. Note that FatFs provides only the file system layer; since it is deliberately separated from the disk I/O layer, you still need a low-level block driver for the SATA or USB interface.

\n" }, { "Id": "131", "CreationDate": "2016-12-06T21:43:10.920", "Body": "

I've installed Z-Wave switches and outlets in a few places around my house. However, I noticed when purchasing the devices that there were a couple of different wireless options available in the brand I was looking at.

\n\n

I'd be curious to know some of the pros/cons between Z-Wave and ZigBee devices. A comparison like this post on when to use WiFi over Bluetooth would be amazing.

\n\n

For instance, I'm curious about things like whether one protocol is potentially more favorable in houses with many walls, or whether one fares better in \"noisy\" wireless homes (e.g. many wireless devices/signal types).

\n", "Title": "Difference Between ZigBee and Z-Wave?", "Tags": "|wireless|zwave|zigbee|", "Answer": "

A tad late, but I'm adding my experience for completeness and as a 2023 option.

\n

I have an almost fully automated home using the Fibaro system. It uses Z-Wave. I decided to go with Fibaro because:

\n
    \n
  1. It was a more professional-looking system.
  2. \n
  3. I asked vendors in NZ to show me their last home/install. Only the Fibaro guy came to the party.
  4. \n
  5. I have 120+ devices and 100s of scenes to control lights, blinds, AV etc. automatically, via a wall-mounted tablet, mobile apps and Alexa/Google/Siri.
  6. \n
\n

Their recent controller, the Home Center 3, now supports Zigbee and other devices as well. So, finally, I can communicate with both Z-Wave and Zigbee devices and have interoperable scenes.

\n" }, { "Id": "134", "CreationDate": "2016-12-06T22:33:44.307", "Body": "

I was reading about the tracking of parcels and other shipments, being a primary example of IoT applications. But I'm wondering about how reliable and precise the positioning would be. I have the impression that those tracking devices would not get any GPS signal in shipping containers, trucks or buildings, where they would be most of the time.

\n\n

Also, I see the same issue regarding the connection to a cellular network or alike, especially in regions with bad network coverage.

\n\n

So, how reliable is location tracking of shipments?

\n", "Title": "How reliable is location tracking of shipments?", "Tags": "|communication|gps|tracking-devices|", "Answer": "

You'd track on several levels.

\n\n

You track the truck with GPS, not the products. You track the individual shipments with ZigBee or RFID. Those technologies are a lot cheaper, so it's affordable to put them in or on less expensive products. Between commercial endpoints, both sides can verify each product via these WPAN technologies on arrival or when packing the truck.

\n" }, { "Id": "138", "CreationDate": "2016-12-06T22:46:38.490", "Body": "

According to the specification, it is always the client that should establish the connection to a server.

\n\n
\n

Client:

\n \n

A program or device that uses MQTT. A Client always establishes the\n Network Connection to the Server. It can

\n \n \n
\n\n

And if this client subscribes to an Application Message, then the server should forward those messages to this particular client.

\n\n
\n

Server:

\n \n

A program or device that acts as an intermediary between\n Clients which publish Application Messages and Clients which have made\n Subscriptions. A Server

\n \n \n
\n\n

Does this mean that if a client subscribes, it remains connected to the server while the subscription is valid, even though there is no data flow most of the time?

\n\n

I came to this conclusion because if the client disconnects after subscribing, the server cannot forward messages to it, since it is the client that must establish the connection. But the client won't know when to re-establish it.

\n", "Title": "Confusion about client-server connection establishment in MQTT", "Tags": "|mqtt|", "Answer": "

You should differentiate connection and session.

\n\n

Everything is defined by the session. When an MQTT connection is authorized by the broker for the first time, the broker creates a session for this connection, usually based on the client-id connection parameter.

\n\n

In the MQTT 3.1.1 protocol (currently the default in most clients/brokers), you may specify a clean session flag (clean=true or clean=false) during connection. If clean=true, the broker will create a new session and close it when the connection is broken/closed. If clean=false, the broker will maintain the session and deliver events into it (into some kind of session storage) even when the client is disconnected. It depends on the broker's implementation whether it allows clean=false sessions at all and what the maximum TTL of such a session is.
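As an illustration of the clean-session behaviour described above, here is a toy, non-normative Python model of how a broker might keep or discard session state (all names are mine; a real broker is of course far more involved):

```python
class ToyBroker:
    """Toy model of MQTT 3.1.1 clean-session handling (illustrative only)."""

    def __init__(self):
        self.sessions = {}  # client_id -> {"queue": [...], "clean": bool, "online": bool}

    def connect(self, client_id, clean):
        sess = self.sessions.get(client_id)
        if clean or sess is None:
            sess = {"queue": [], "clean": clean}  # fresh session state
        sess["online"] = True
        self.sessions[client_id] = sess
        # messages queued while the client was offline are handed over on reconnect
        pending, sess["queue"] = sess["queue"], []
        return pending

    def disconnect(self, client_id):
        sess = self.sessions[client_id]
        sess["online"] = False
        if sess["clean"]:
            del self.sessions[client_id]  # clean=true: state dies with the connection

    def deliver(self, client_id, message):
        sess = self.sessions.get(client_id)
        if sess is not None:
            sess["queue"].append(message)  # clean=false: queued even while offline
```

With clean=false the subscriber gets the events it missed while disconnected; with clean=true the session (and anything queued for it) evaporates on disconnect.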

\n\n

In the MQTT 5.0 protocol (very new, but promising) it is possible to specify the session TTL from the client side, or even to change it after the connection has been made. This is extremely useful for unstable WAN connections (common in IoT) or for stateful connections like the one you described.

\n\n

AFAIK, the MQTT 5.0 protocol can currently be used from the client side in Python with gmqtt and in JavaScript with mqtt.js.

\n" }, { "Id": "143", "CreationDate": "2016-12-06T22:55:54.177", "Body": "

Question: What is the underlying design behind an \u201cEmbedded Agent\u201d in relation to low-powered Internet of Things (IoT) edge devices?

\n\n

Some of the IoT cloud service vendors keep referring to installing an embedded agent on the sensor-based edge devices. It appears to be a proprietary piece of software which vendors install on each device connecting to the cloud. Below are two images of software stacks with references to an Agent. A portion of the software stack resides in the microcontroller.

\n\n

\"IOT

\n\n

\"IOT

\n\n

Also, here is a very broad explanation from the Thingworx blog:

\n\n
\n

An agent is an embedded program that runs on or near an IoT device and\n reports the status of some asset or environment. There is always some\n agent present in an IoT application. Typically the agent reads the\n status from sensors or local connectivity to an asset, applies some\n rules or logic about how often the sender has to aggregate the\n information, and then sends the information over a long-haul\n communications network to the server. This process can operate in\n reverse as well.

\n
\n\n

It is my assumption that this agent consists of connectivity information such as IP address, server name and SSID-type information to aid connectivity. Do these Embedded Agents have other functionality beyond providing connectivity?

\n\n

References:

\n\n\n", "Title": "What is an \u201cEmbedded Agent\u201d in reference to a Low Powered IoT Edge Device?", "Tags": "|definitions|", "Answer": "

Generally speaking, an agent is a 'bi-directional' piece of software; i.e., it reads parameters from the device and then communicates them to the cloud, or even to a gateway. More often than not, an OEM will control the libraries used to develop the software that reads and controls the device's parameters, while the OEM may choose any of the popular communication protocols (MQTT, HTTP, etc.) to publish the values read. Typically, integrating these two is the space where a System Integrator comes in.

\n\n

For example, an agent could be running on a Windows desktop to read the rpm of the fan every 5 seconds. This value is then communicated over to a cloud platform over an agreed protocol.

\n\n

Sample code from the Paho MQTT (Python) web site:

\n\n
import paho.mqtt.client as mqtt\n\nmqttc = mqtt.Client()\nmqttc.connect(\"iot.eclipse.org\")\nmqttc.loop_start()\n\nwhile True:\n    temperature = sensor.blocking_read()\n    mqttc.publish(\"paho/temperature\", temperature)\n
\n\n

The above snippet is roughly an agent because there is the 'from device' part in the form of the function sensor.blocking_read() and the 'to cloud' part in the form of mqttc.publish().

\n\n

Advanced agents will have mechanisms to handle offline storage, TLS support for communication towards the cloud, responding gracefully to any updates from the cloud (including reboots, if needed), etc. And, in the specific case of this question, the agent will handle power constraints too; for example, responding to device-level triggers such as sleep, wake-up, etc.

\n" }, { "Id": "149", "CreationDate": "2016-12-07T01:39:34.487", "Body": "

Following my previous question here, what are the solutions to overcome issues with the data connection (e.g. data quota exhausted, or being out of coverage) while working with my IoT devices, especially when tackling critical issues?

\n\n

This is focused on mobile connectivity specifically, since I don't want this to be broad and off-topic. I ask because in my country, Malaysia, we quite regularly have lag issues compared to the US and other western countries.

\n", "Title": "Data connection issues during when working with IoT devices", "Tags": "|mobile-data|", "Answer": "

To begin with, if there are 'data issues', those issues should be taken up with the data service provider as a violation of an agreed SLA. Next, there should be an offline data store option in your solution so that the requirement of gathering data is taken care of; if/when data coverage is available again, the older data is republished, or expired after a defined window, etc. As a last resort, the offline data could be downloaded via USB, etc.

\n\n

Thus, in summary, storing data offline, republishing when online, and archival of the collected data are things that should be considered in the overall solution.
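The store-offline/republish-when-online idea can be sketched in a few lines of Python. This is a toy illustration under my own naming; `publish` stands in for whatever transport you use, and the bounded deque implements the 'defined window' by dropping the oldest readings:

```python
import collections

class StoreAndForward:
    """Buffer readings locally and flush them when connectivity returns."""

    def __init__(self, publish, max_items=1000):
        self.publish = publish                              # returns False while offline
        self.backlog = collections.deque(maxlen=max_items)  # oldest data expires first

    def send(self, reading):
        self.backlog.append(reading)
        self.flush()

    def flush(self):
        while self.backlog:
            if not self.publish(self.backlog[0]):
                break                                       # still offline; retry later
            self.backlog.popleft()
```

Calling `flush()` periodically (or on a connectivity-restored event) republishes the backlog in order without losing readings gathered while offline.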

\n" }, { "Id": "155", "CreationDate": "2016-12-07T07:17:05.313", "Body": "

We are planning to implement a proximity beacon network which provides information to the user's app based on proximity in the store. Our objective is to cover a radius of 5-7 metres.

\n\n

Which protocol has better connectivity between the beacon and the iOS/Android app?

\n\n

The Network will be as below,

\n\n

\"process

\n\n

To clarify, we are focusing on Eddystone over the other protocols because of Google. Considering the current state of beacon technology, is there a better alternative for communication with mobile applications? If there is, what is its advantage over Eddystone?

\n", "Title": "Connecting Proximity beacon with Mobile App", "Tags": "|protocols|beacons|eddystone|", "Answer": "

I don't think there is currently a better option out there. Eddystone doesn't even need an app to work, since it can broadcast URLs; it's open source; it's more secure than the main competitors thanks to ephemeral IDs; and it provides telemetry.
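For a sense of what the protocol actually broadcasts, here is an illustrative Python parser for the documented Eddystone-UID frame layout (frame type 0x00, calibrated TX power at 0 m, a 10-byte namespace and a 6-byte instance); the function name is my own:

```python
def parse_eddystone_uid(frame: bytes) -> dict:
    """Decode an Eddystone-UID frame (the beacon's service-data payload)."""
    if frame[0] != 0x00:
        raise ValueError("not a UID frame")
    return {
        "tx_power_0m": int.from_bytes(frame[1:2], "big", signed=True),
        "namespace": frame[2:12].hex(),   # 10-byte namespace id
        "instance": frame[12:18].hex(),   # 6-byte instance id
    }
```

The URL and TLM (telemetry) frame types follow the same pattern with different payload layouts, which is what makes the format easy to consume on both Android and iOS.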

\n\n

This blog lists a lot of reasons why the Eddystone beacons have risen so much in popularity. Between the lines, one can even presume that they consider iBeacon dead, barring some fundamentally changed successor.

\n\n

The Eddystone beacons just bring more ecosystem, accessibility and flexibility out of the box. So in December 2016, Eddystone seems to be the only sensible protocol choice. (Unless you intend to equip an Apple campus ;))

\n\n

A lot of blog entries I found moved over the year from \"let's compare the protocols\", over \"oh, Apple didn't even mention theirs in their keynote\" to \"the reasons why Eddystone won.\"

\n" }, { "Id": "176", "CreationDate": "2016-12-07T11:27:43.337", "Body": "

Raspberry Pi is broadly used for IoT, and there is a lot of software for it. But I would like to know about any completely open source designs, including software and hardware (PCB, not components).

\n\n

I have heard about Banana Pi but I'm not sure if it's completely open source or if other alternatives exist.

\n\n

The main requirement is to be fully compatible with any of the broadly existing software platforms (Arduino, Raspberry Pi).

\n\n

Of course, the components used on the PCB should be available to anyone.

\n", "Title": "Are there any IoT devices with fully open source hardware?", "Tags": "|raspberry-pi|banana-pi|", "Answer": "

The Postscapes IoT Hardware Guidebook lists quite a few:

\n\n\n\n

Judging by the names, there are a few derivatives of the Arduino. Furthermore, all devices running Linux should be more or less compatible with the Raspberry Pi.

\n" }, { "Id": "191", "CreationDate": "2016-12-07T15:54:41.350", "Body": "

My wife and I just purchased a new house, and the ADT security sensors were still installed, but there was no keypad panel. After speaking with a rep about pricing and logistics, we decided not to continue the service. I asked about the existing hardware, and the rep said that should we decide to continue the service we could keep using the existing hardware; otherwise, someone could come and uninstall it. Anyway, I requested that they uninstall the hardware about a month and a half ago, with no further response. So, as far as I can tell, I now have 3 Honeywell door sensors and 1 Honeywell motion detector (still installed).

\n\n

I'm curious if there is any way to include these sensors with my Samsung SmartThings system? Would it be as simple as adding/changing a wireless component to a compatible signal with ZigBee/Z-Wave/Bluetooth/etc.?

\n\n

Update:

\n\n

The model number for the entire unit seems to be A 026-0934; however, I couldn't find it in the catalogue mentioned in the comments. It looks like there are several other model numbers, mostly referring to what appears to be a safety switch that cuts power when the unit is opened.

\n\n\n", "Title": "Repurpose ADT sensors", "Tags": "|samsung-smartthings|sensors|home-security|", "Answer": "

Give this one a try.

\n\n

It talks about connecting your old wired sensor endpoints from the alarm panel to SmartThings. There might be something you can use from reading the post.

\n" }, { "Id": "196", "CreationDate": "2016-12-07T21:35:59.250", "Body": "

The smart household appliances that can be controlled via Home Connect can also be accessed via a developer SDK provided by the Home Connect Service provider. How can I find out what I will be able to do with a prospective oven, e.g. a Siemens iQ700 CM678G4S6B, by using the SDK provided by the developer of the service?

\n\n

I found this information in the SDK developer information, detailing that in general I can get information about several heating modes and a pizza mode. Will I be able to use those functions with the oven and how detailed will the information about the oven status be?

\n", "Title": "What capabilities does my Home Connect appliance provide in the developer SDK?", "Tags": "|smart-home|kitchen-appliances|home-connect|", "Answer": "

Currently, it appears that the SDK is still in beta. However, you can apply for it and get a feel for how it will work with their simulator.

\n\n

When it does finally come out, you should be able to do anything with it that you are able to do with the Home Connect application. As a matter of fact, the Home Connect SDK is called (right in their banner) Home Connect for Developers.

\n\n

In other words, anything you are able to do with the Home Connect application, you should also be able to do, with sufficient coding, as a developer with the SDK.

\n" }, { "Id": "203", "CreationDate": "2016-12-08T07:29:57.140", "Body": "

There are tons of tutorials on the web, especially with RabbitMQ, on how to publish sensor data; for example, temperature, humidity, etc. Just publish the value to a message queue and anybody can consume it.

\n\n

So far so good. But how about actuators?

\n\n

Let's take a light switch for example. The light switch publishes the current state of the luminaire to a queue. It also subscribes to a second queue to listen for events. This would allow a bidirectional communication. If someone/something wants to turn on the light, an event has to be published to the message queue the light switch is listening to.

\n\n

I hope you understand the idea.\nIs this the way to go with actuators? Is there any smarter solution?\nAnd how about security, thinking of using this for doors, for example? Is it possible to publish an open-door event from anywhere? How easily can it be hacked?

\n", "Title": "Is the Subscriber-Publisher pattern applicable also to actuators?", "Tags": "|security|publish-subscriber|actuators|", "Answer": "
\n

But how about actuators?

\n
\n\n

Yes, the pub-sub pattern is applicable to actuators.

\n\n
\n

Is this the way to go with actuators?

\n
\n\n

This is one of the ways to go, and it is booming because of the many cloud providers, like

\n\n\n\n

trying to occupy the IoT space and move data from sensors to the cloud easily, each with a different approach. As devices have limited connectivity, power and bandwidth, they need a lightweight protocol like MQTT, which is based on the pub-sub model.

\n\n

My point here is that any device that can sense and produce data can use pub-sub, but the smartness comes from the type of implementation. For example, if you are not using MQTT over an encrypted channel (TLS/SSL), the data can be sniffed.

\n\n
\n

Is there any smarter solution?

\n
\n\n

It depends on the application and the constraints of the problem, and the so-called smarter solution varies over time. One more thing to note here: having a smarter solution is not the smartest way to go, because the implementation is what matters most, not the protocol or method you choose.

\n\n
\n

Is it possible to publish a open door event from anywhere? How easy\n can it be hacked?

\n
\n\n

Yes, it is possible to open the door from anywhere by publishing an event, but this all depends on the application and the authentication you provide; for example, you can allow your application to subscribe/publish to topics only after authentication.
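The 'publish/subscribe only after authentication' idea can be modelled with a toy per-topic access-control list in Python (all names here are hypothetical; real brokers such as Mosquitto implement the same idea via ACL configuration plus TLS):

```python
class SecureBus:
    """Toy pub-sub bus with a per-topic access-control list."""

    def __init__(self, acl):
        self.acl = acl                # (user, topic) -> set of allowed actions
        self.subscribers = {}         # topic -> list of callbacks

    def _check(self, user, topic, action):
        if action not in self.acl.get((user, topic), set()):
            raise PermissionError(f"{user} may not {action} on {topic}")

    def subscribe(self, user, topic, callback):
        self._check(user, topic, "sub")
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, user, topic, payload):
        self._check(user, topic, "pub")
        for cb in self.subscribers.get(topic, []):
            cb(payload)
```

An unauthorized party can still attempt to publish an "open door" event, but the broker rejects it before any subscriber (the actuator) ever sees it; combined with TLS this closes the obvious attack paths.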

\n\n
\n\n

Real Case Scenario:

\n\n

I know a lot of companies that are using this exact model for actuators. Recently I worked with a team on solar tracking systems, where the solar panels are controlled and monitored using wireless technologies.

\n\n

In particular, to move/rotate an array of panels according to the sun's position, based on different energy-optimizing algorithms, we use linear actuators. In this system we also have a provision to control the panels manually from web/mobile dashboards in case of emergencies or for maintenance purposes.

\n\n

In the above scenario, a pub-sub model with authentication/encryption is used to control the actuators.

\n" }, { "Id": "211", "CreationDate": "2016-12-08T10:03:57.357", "Body": "

I've recently heard about the Mirai worm, which infects vulnerable routers, IoT devices and other internet-connected appliances with insecure passwords. Mirai is suspected of being the cause of some of the largest DDoS attacks in history:

\n\n
\n

Dyn estimated that the attack had involved \u201c100,000 malicious endpoints\u201d, and the company, which is still investigating the attack, said there had been reports of an extraordinary attack strength of 1.2Tbps.

\n
\n\n

The question Can I monitor my network for rogue IoT device activity? provides some useful generic tips for spotting malware on my IoT network, but how can I check whether my devices are infected with the malware? Incapsula provide a tool which can scan for devices vulnerable to Mirai, but is there a way of autonomously checking whether any devices on my network are infected (or of providing real-time protection), so that I don't have to remember to keep re-running the tool?

\n", "Title": "How can I check if my IoT devices are infected with the Mirai worm?", "Tags": "|security|networking|mirai|", "Answer": "

Mirai attacks embedded Linux. You would first need to get command-line access to your IoT device. After that, you can compute checksums of the read-only filesystem and compare them to a clean firmware version. Sometimes companies host the original firmware online, or you can contact them for a copy. If you want to understand how firmware is usually packaged, I suggest looking into the program Binwalk. OpenWrt has good documentation about flash memory.

\n\n

When you flash/reflash firmware onto the IoT device, sections of the firmware (kernel, read-only root filesystem, writable config section) are stored in MTD partitions on the device's flash chip. You can copy/download these partitions (/dev/mtdblock1 is a Linux example) and compare them to the original firmware via checksums. If you fear a rootkit and don't trust the command line, you can download and examine the firmware directly off the flash chip with hardware tools, like a Bus Pirate and a SOIC8 clip.
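The partition-vs-clean-firmware comparison boils down to hashing both images. A minimal Python sketch (the file names are examples only; in practice the first argument would be a dump of something like /dev/mtdblock1 pulled off the device):

```python
import hashlib

def sha256_file(path, chunk_size=65536):
    """Stream a file through SHA-256 so large partition dumps fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def partition_matches_reference(partition_dump, reference_image):
    """True if the dumped partition is byte-identical to the vendor image."""
    return sha256_file(partition_dump) == sha256_file(reference_image)
```

Note that this only works for partitions that should be byte-identical to the vendor image (kernel, read-only rootfs); the writable config partition will legitimately differ.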

\n" }, { "Id": "221", "CreationDate": "2016-12-08T13:07:28.870", "Body": "

I've been reading about XMPP as a potential communications protocol for IoT devices but, after reading one source, I'm unsure whether it's really an appropriate protocol if you're concerned about overhead for each message.

\n\n

This source states:

\n\n
\n

However, XMPP has a number of problems that make it somewhat undesirable for EMBEDDED IOT PROTOCOLS. As an XML-based protocol, XMPP is very verbose, even more so than HTTP, and has heavy data overhead. A single request/response exchange to send one byte of data from an IOT CONNECTED DEVICE to the server is more than 0.5 kB.

\n \n

There is a draft specification that would compress XMPP using an XML encoding called efficient XML Interchange (EXI). But even with EXI, the same one byte of data gets hundreds of bytes of protocol overhead from XMPP alone. EXI is also a much harder format to process than other options now available. Because of these inherent problems, it is generally recommended to avoid using XMPP in embedded IoT applications.

\n
\n\n

However, XMPP promotes itself as suitable for IoT applications (although it doesn't specifically say that it's low-overhead), so it seems odd that such a large, seemingly verbose protocol would be recommended/promoted for IoT devices.

\n\n

Is the overhead of XMPP really as large as the source suggests for small amounts of data? For example, how much overhead would there be when sending an 8-byte message?

\n\n

Also, is the overhead so great if EXI compression is used (as mentioned in the source)? Would this also come with some pitfalls?

\n", "Title": "Does XMPP have a large overhead for IoT devices sending short, frequent messages?", "Tags": "|communication|networking|xmpp|", "Answer": "
    \n
  1. Many years ago I analysed the difference between using XML in a payment network for payment transaction representation (card_number, date, time, terminal_id, and a list of additional elements) and using the traditional bit-mapped ISO 8583.
  2. \n
  3. XML has a huge overhead. If you consider the impact in networks with 10000+ nodes, each of them sending 10+ messages hourly/daily to the central host, then XML goes out and you really need something more efficient.
  4. \n
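The overhead difference is easy to see by comparing a single one-byte reading wrapped in an XMPP-style stanza with the same value in a fixed binary layout. The stanza below is illustrative rather than spec-exact, and the binary layout is a made-up ISO 8583-style record, but the size ratio is representative:

```python
import struct

reading = 42  # one byte of sensor data

# XML-based framing, roughly what an XMPP message stanza looks like
stanza = ("<message from='sensor@example.org/dev' "
          "to='host@example.org' type='chat'>"
          "<body>42</body></message>")

# Fixed binary framing in the spirit of bit-mapped ISO 8583:
# a one-byte message type, a two-byte field id and the value itself
packed = struct.pack("<BHB", 1, 7, reading)

print(len(stanza.encode()), "bytes of XML vs", len(packed), "bytes of binary")
```

And this stanza does not yet include the XMPP stream setup, presence and TLS negotiation that a real session adds on top.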
\n" }, { "Id": "230", "CreationDate": "2016-12-08T17:02:28.840", "Body": "

For some IoT devices, the data that needs to be sent is confidential, and hence sending it in plain text is not acceptable. Therefore, I've been considering how to encrypt data sent between IoT devices. An article I recently read on the RFID Journal website mentions the NSA-developed SPECK and SIMON ciphers as particularly suited to IoT applications:

\n\n
\n

NSA is making the ciphers [...] publicly available at no cost, as part of an effort to ensure security in the Internet of Things (IOT), in which devices are sharing data with others on the Internet.

\n \n

[...]

\n \n

NSA researchers developed SIMON and SPECK as an improvement on block cipher algorithms already in use that were, in most cases, designed for desktop computers or very specialized systems

\n
\n\n

Why should I select a newer algorithm such as SIMON or SPECK for my IoT device, especially for applications where power is constrained (e.g. battery power only)? What are the benefits compared to other encryption systems such as AES?

\n", "Title": "What, exactly, makes SPECK and SIMON particularly suitable for IoT devices?", "Tags": "|security|simon|speck|", "Answer": "

In \"The Simon and Speck Block Ciphers on AVR 8-bit Microcontrollers\" Beaulieu et al. investigate the implementation of SIMON and SPECK on a low-end 8-bit microcontroller and compare the performance to other cyphers. An Atmel ATmega128 is used with 128 Kbytes of programmable flash memory, 4 Kbytes of SRAM, and thirty-two 8-bit general purpose registers.

\n\n

Three encryption implementations are compared:

\n\n
    \n
  1. RAM-minimizing \n\n
    \n

    These implementations avoid the\n use of RAM to store round keys by including the pre-expanded round keys\n in the flash program memory. No key schedule is included for updating this\n expanded key, making these implementations suitable for applications where\n the key is static.

    \n
  2. \n
  3. High-throughput/low-energy\n\n
    \n

    These implementations\n include the key schedule and unroll enough copies of the round function in\n the encryption routine to achieve a throughput within about 3% of a fully-\n unrolled implementation. The key, stored in flash, is used to generate the\n round keys which are subsequently stored in RAM.

    \n
  4. \n
  5. Flash-minimizing\n\n
    \n

    The key schedule is included here.\n Space limitations mean we can only provide an incomplete description of\n these implementations. However, it should be noted that the previous two\n types of implementations already have very modest code sizes.

    \n
  6. \n
\n\n
\n\n

To compare different cyphers a performance efficiency measure - rank - is used. The rank is proportional to throughput divided by memory usage.

\n\n
\n

SPECK ranks in the top spot for every block and key size which it supports. Except for the 128-bit block size, SIMON\n ranks second for all block and key sizes.

\n \n

...

\n \n

Not surprisingly, AES-128 has very good performance on this platform, although for the same block and key size, SPECK has about twice the performance. For the same key size but with a 64-bit block size,\n SIMON and SPECK achieve two and four times better overall performance, respectively, than AES.

\n
\n\n

Comparing SPECK 128/128 to AES-128, the authors find that the memory footprint of SPECK is significantly smaller (460 bytes vs. 970 bytes) while throughput is only slightly lower (171 cycles/byte vs. 146 cycles/byte). Thus SPECK's performance (in the chosen rank metric) is higher than that of AES. Considering that speed is correlated with energy consumption, the authors conclude that \"AES-128 may be a better choice in energy critical applications than SPECK 128/128 on this platform.\" The authors are, however, uncertain whether heavy usage of RAM access (high-speed AES implementations) is more energy efficient than a register-based implementation of SPECK. In either case, a significant reduction in flash memory usage can be achieved, which might be of relevance on low-end microcontrollers.
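Part of why SPECK is so compact is that it uses only modular addition, rotation and XOR (ARX), with no S-boxes or lookup tables. The following Python sketch of SPECK64/128 (rotation amounts 8 and 3, 27 rounds, per the published description) is illustrative only and not a vetted implementation, so do not use it for real security:

```python
MASK = 0xFFFFFFFF                 # 32-bit words (SPECK64 variants)
ALPHA, BETA, ROUNDS = 8, 3, 27    # SPECK64/128 parameters

def ror(x, r): return ((x >> r) | (x << (32 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def speck_round(x, y, k):
    """One ARX round: add, rotate, xor only."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def speck_round_inv(x, y, k):
    y = ror(x ^ y, BETA)
    x = rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y

def expand_key(k0, l0, l1, l2):
    """The key schedule reuses the round function with the round index as key."""
    ks, l = [k0], [l0, l1, l2]
    for i in range(ROUNDS - 1):
        l_new, k_new = speck_round(l[i], ks[i], i)
        l.append(l_new)
        ks.append(k_new)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = speck_round(x, y, k)
    return x, y

def decrypt(x, y, ks):
    for k in reversed(ks):
        x, y = speck_round_inv(x, y, k)
    return x, y
```

Everything maps to single-cycle 8-bit operations (add with carry, shifts, XOR), which is exactly what makes the cipher a good fit for parts like the ATmega128 in the paper.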

\n\n
\n

If an application requires high speed, and memory usage is not a priority, AES has the fastest implementation (using 1912 bytes of flash, 432 bytes RAM) among all block ciphers with a 128-bit block and key that we are aware of, with a cost of just 125 cycles/byte. The closest AES competitor is SPECK 128/128, with a cost of 138 cycles/byte for a fully unrolled implementation. Since speed is correlated with energy consumption, AES-128 may be a better choice in energy critical applications than SPECK 128/128 on this platform. However, if a 128-bit block is not required, as we might expect for many applications on\n an 8-bit microcontroller, then a more energy effcient solution (using 628 bytes of flash, 108 bytes RAM) is SPECK 64/128 with the same key size as AES-128 and an encryption cost of just 122 cycles/byte, or SPECK\n 64/96 with a cost of 118 cycles/byte.

\n
\n\n
\n\n

Additionally, this talk has an Enigma figure in it; who could resist a cypher that references Enigma?

\n" }, { "Id": "236", "CreationDate": "2016-12-09T12:28:02.783", "Body": "

I've got a Linksys Wireless Internet Monitoring Camera (a WVC54GCA with recent firmware), which I've set up at home. I configured it to send me 5-second short videos by e-mail whenever it detects physical movement during my absence. Despite my efforts to configure the best settings, I still get thousands of e-mails every day (one every minute or more) with no movement in them, only slight contrast changes.

\n\n

Here are examples of three different videos:

\n\n

\"Video\n\"Video\n\"Video

\n\n

Is there anything that I can do about this problem? Or should I buy a better camera?

\n\n

To clarify, I want to receive the e-mails, but with valid physical movements on the attached videos.

\n\n
\n\n

Here are my configuration settings from /adm/image_fs.htm page:

\n\n

\"Linksys

\n\n

The White Balance options (if relevant) can be selected to: Auto, Indoor (Incandescent), Fluorescent (white light), Fluorescent (yellow light), Outdoor or Black & White.

\n\n

Settings at /adm/event_fs.htm page:

\n\n

\"Linksys

\n", "Title": "My home monitoring camera sends me thousands of e-mails every day", "Tags": "|digital-cameras|home-security|surveillance-cameras|", "Answer": "

Generally speaking, you have to adjust the trigger for the motion detection. There's the sensitivity for the actual motion detection, and, as bravokeyl already says, despite all the technology, that is most efficiently tuned by trial and error.
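What the sensitivity setting is tuning is, roughly, a threshold on the inter-frame difference. The sketch below (hypothetical Python, with frames as flat lists of pixel intensities and an invented 0-100 sensitivity scale) shows why a global contrast change can cross the same threshold that real motion does:

```python
def motion_score(prev_frame, frame):
    """Mean absolute per-pixel difference between consecutive frames."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def triggers(prev_frame, frame, sensitivity):
    # higher sensitivity = lower threshold = more (false) triggers
    threshold = 100 - sensitivity          # hypothetical 0-100 scale
    return motion_score(prev_frame, frame) > threshold

base = [100] * 64                 # a static 8x8 scene
moved = [100] * 32 + [160] * 32   # an object enters half the frame
dimmed = [90] * 64                # a cloud passes: uniform 10-level drop
```

A uniform lighting change moves every pixel a little, which can add up to the same score as one object moving a lot; lowering the sensitivity rejects the former while still catching the latter.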

\n\n

That might not work the same at all times of day, due to low sun, clouds and other factors changing the lighting. Also cats; motion detection and cats mix badly. Looking at the now-provided options of your settings, I'd start by deactivating the low-light sensitivity box and checking the results, at day and at night. This might make the camera useless at night, but that has to be tested with the actual device.

\n\n

There is however another setting that might help you out that is listed in the handbook:

\n\n
\n

Interval Enter the time in minutes that must pass\n between motion detection events. Valid values are 0-5,\n 10, or 15. The default is 2. A value of 0 indicates no delay\n between events.

\n
\n\n

WVC54GCA User guide

\n\n

It seems like that setting is set to zero if you get that many e-mails. Maybe revert it to the default, or even set it to five minutes. That should definitely reduce the number of e-mails.

\n" }, { "Id": "238", "CreationDate": "2016-12-09T12:57:28.410", "Body": "

At the beginning of 2014 I purchased a couple of coin sized TrackR devices to tag my belongings, so I can easily find them. I use an iPhone to connect to them.

\n\n

My main problem with these devices was that the ringer keeps activating on its own, at random, without any reason, and keeps ringing constantly until switched off (by taking the battery out).

\n\n

Is this kind of problem a common one? How can I avoid this issue? Or has this problem been addressed in later versions of these devices? Obviously I don't want to buy the new version in the hope that it's going to work this time.

\n\n

To clarify, I didn't use the anti-theft option to ring when it's separated from the phone; I used it only for tracking purposes. It was ringing especially when disconnected from the phone, but only after a long time.

\n", "Title": "TrackR keep ringing on its own", "Tags": "|gps|tracking-devices|", "Answer": "

Here's another idea, also from TrackR's support page. Again, it's a feature that can be enabled/disabled: "Device alert." Basically, it warns when your phone can no longer make a Bluetooth connection with your TrackR devices.

\n
\n

If your TrackR device and/or smart phone is sounding off at seemingly random times, you may have device and/or phone alerts enabled. When \u201cDevice alert\u201d is enabled your TrackR device will sound off when the bluetooth connection betwen your TrackR and phone is off. Conversely, when "Phone alert" is enabled your phone will begin to ring when bluetooth connection between your TrackR and phone is lost. To remedy this issue you can turn off phone and device alerts from the TrackR app. To turn off the alerts please follow these steps:

\n
    \n
  1. Open the TrackR app.
  2. Click the icon in the top right corner of the app. This icon will look like 3 blocks stacked on top of each other.
  3. A menu will slide in with a list of each TrackR device you have connected with. Click the gear cog next to a listed TrackR device.
  4. Another menu will slide in where you will have the option to turn device and phone alerts on and off. Move them to the off position. Device and Phone separation alerts need to be toggled off.
  5. Repeat the above steps for any other TrackRs you are paired with.
\n
\n

For more information, check out the article referenced.

\n" }, { "Id": "243", "CreationDate": "2016-12-09T13:52:52.440", "Body": "

I have a friend who has a Lyric T5 Wi-Fi thermostat which he was controlling from his iPhone via Apple HomeKit. Recently, all commands from his phone stopped reaching his device, indicating that the two were no longer connecting to each other. Why would this be happening, and how could he fix it?

\n", "Title": "Lyric T5 does not respond to commands via Apple Homekit", "Tags": "|smart-home|apple-homekit|", "Answer": "

Okay, turns out he found the solution here:

\n
\n

If your commands are not responding, it means there is a disconnect between the app and Apple HomeKit. To resolve this use the following steps:

\n\n
\n

Apparently, doing this fixed his problem entirely.

\n" }, { "Id": "250", "CreationDate": "2016-12-09T15:49:48.127", "Body": "

If I want my smart plug design to be a commercial product in the EU it must surely meet some requirements, regulations or directives.

\n\n

I know about the CE (Conformit\u00e9 Europ\u00e9enne) marking, which is mandatory in the European Economic Area. It means, if I can believe Wikipedia:

\n\n
\n

Most electrical products must comply with the Low Voltage Directive and the EMC Directive; toys must comply with the Toy Safety Directive

\n
\n\n

I am mainly concerned about safety, as smart plugs and sockets connect directly to mains voltage and live wires, which is always dangerous. Proper sealing is needed.

\n\n

I have checked the Low Voltage Directive (LVD) and I think it covers the safety requirements for a smart plug, based on the excerpt below.

\n\n
\n

The LVD covers all health and safety risks of electrical equipment operating with a voltage between 50 and 1000 V for alternating current and between 75 and 1500 V for direct current. These voltage ratings refer to the voltage of the electrical input or output, not to voltages that may appear inside the equipment.

\n
\n\n

Now, there are other directives as well, for example the Measuring Instruments Directive, which also mentions \"Active electrical energy meters\".

\n\n

There is also a general product safety directive according to this list.

\n\n

All in all, which directives (mentioned above or not) are mandatory for commercial smart plug designs in the European Union? The main concern is safety.

\n", "Title": "European Union Regulations for Smart Plugs", "Tags": "|smart-home|standards|safety|smart-plugs|ac-power|", "Answer": "

Most smart plugs I have come across are Wi-Fi controlled devices.

\n\n

I expect the regulations would be similar to those imposed on a Wi-Fi router such as Technicolor's TG582n.

\n\n
\n \n
\n\n

It's worth picking up the phone and talking to your local compliance house. I used CEI previously. It's not cheap but if you are serious and intend to sell your device, you will need to get someone to sign off on your product, otherwise it's a dangerous plug and not a smart plug.

\n" }, { "Id": "251", "CreationDate": "2016-12-09T16:27:00.090", "Body": "

A few years back I purchased an Eye-Fi Pro X2 for my Canon EOS digital camera.

\n\n

Unfortunately, the camera was stolen, and at that time I had photo sync disabled in Eye-Fi Center (which is now deprecated), so I couldn't do anything remotely (either re-activate photo sync or track the thief's location based on the access point the card connects to). The X2 is End of Life now anyway.

\n\n

I'd like to know what I can do in the future to deal with similar situations using the latest version of Eye-Fi SDHC memory cards.

\n\n

In other words, I'd like to know whether it's possible to track the location of the lost camera remotely (assuming the camera is on with its card intact), or at least re-activate disabled photo sync remotely (assuming the card has been registered correctly). What should I activate or configure in order to prepare for the next potential incident?

\n", "Title": "How to track a stolen Canon EOS camera using SDHC memory card?", "Tags": "|geo-tagging|memory-cards|digital-cameras|", "Answer": "

As detailed in my answer to the precision question (https://iot.stackexchange.com/a/341/78), the current generation of SD cards doesn't seem to support inherent geo-location capabilities anymore. Furthermore, Eye-Fi was bought by Ricoh. Looking at their product portfolio, they seem to prefer to build the GPS function into the camera right away, or to offer additional GPS modules.

\n\n

Have a look at this blog about geo-tagging photos. SD cards aren't even listed there anymore, and I couldn't find any decent, recent SD cards with inherent geo-tagging. The trend seems to be different: the new Canon EOS 7D has built-in GPS already.

\n\n

How to get your camera back after it's been stolen is of course extremely model-specific and cannot be answered in general.

\n" }, { "Id": "256", "CreationDate": "2016-12-09T18:01:33.013", "Body": "

Weightless-W promotes itself as a \"low power wide area (LPWAN) star network architecture operating in TV white space spectrum\", and seems to suggest that this method of transmission has several favourable characteristics:

\n\n
\n

At the terminal level data rates from 1kbit/s to 10Mbit/s are possible depending on link budget with data packet sizes from 10 bytes and no upper limit with an extremely low overhead - 50 byte packets have less than 20% overhead.\n [...]

\n \n

An extremely wide range of modulation schemes and spreading factors provides flexibility in network design enabling 5km coverage to indoor terminals.

\n
\n\n

On the Which Weightless Standard page, it also states:

\n\n
\n

If TV white space spectrum is available in the location where the network will be deployed and an extensive feature set is required, use Weightless-W

\n
\n\n

The problem comes with determining if white space spectrum is available; how can I check where white space is open for IoT use with Weightless-W? Is there a tool I can use to determine this or a map? Also, would it be necessary to consider whether other IoT networks occupy some of the white space frequencies and the possibility that they could interfere?

\n\n

If it is useful in your answer, you can specifically focus on determining TV white spaces in the UK, although a more general solution would be interesting to read too.

\n", "Title": "How can I determine TV 'white space' frequencies for use with Weightless-W?", "Tags": "|networking|weightless|", "Answer": "

Wikipedia's article on Weightless states that the base station will determine an appropriate frequency to use by querying a database:

\n\n
\n

In networks using Weightless-W technology a base station queries a database which identifies the channels that are being used for terrestrial television broadcast in its local area. The channels not in use \u2013 the so-called white space \u2013 can be used by the base station to communicate with terminals using the Weightless-W protocol.

\n
\n\n

To mitigate interference from other IoT networks, Weightless-W can perform frequency hopping:

\n\n
\n

Operation in unlicensed spectrum requires good interference tolerance. Weightless employs a frequency hopping regime at a frame rate of 2s to avoid interference on congested networks and to limit the impact of interference to a single hop rather than degrading the entire transmission.

\n
\n\n

In the UK, Ofcom (the communications regulator) have published guidance on how to access approved TV white space databases for use with IoT devices, though this will be different in other countries.
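Conceptually, the database lookup amounts to a set difference: given the channels in use for TV broadcast in an area, the remaining channels are candidate white space. The sketch below is an illustrative model only — real TVWS databases (such as the Ofcom-approved ones) expose their own query APIs, and the channel numbers here are hypothetical:

```python
# Illustrative model of a white-space lookup: the base station asks a
# database which TV channels are broadcast locally and treats the rest
# as candidate white space. All channel numbers are hypothetical.

def free_channels(all_channels, in_use):
    """Return the channels not used for TV broadcast, sorted."""
    return sorted(set(all_channels) - set(in_use))

# Assume a UHF band of channels 21-68 for illustration; consult the
# regulator's approved database for real allocations.
uhf_channels = range(21, 69)
locally_broadcast = {23, 26, 29, 33, 48, 53}   # hypothetical local muxes

white_space = free_channels(uhf_channels, locally_broadcast)
print(len(white_space))  # 48 channels minus 6 in use = 42 candidates
```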

\n" }, { "Id": "257", "CreationDate": "2016-12-09T18:32:35.510", "Body": "

To clarify what classifies a device as IoT with a specific example: are all flying drones (UAVs) part of the Internet of Things? Or is there some minimum requirement to classify them as IoT? What's the stance of the relevant standardization organizations?

\n", "Title": "Are drones considered part of the IoT by any officials?", "Tags": "|definitions|drones|", "Answer": "

Possibly they fall under the more general, top-level group of the \"Internet of Everything\". This group includes the Internet of Things but is broader, covering devices such as PCs, tablets, industrial computers and so on. A drone is a bit more complex than the accepted definition of a thing in the IoT allows for. For IoT, think fridges, water heaters, smart meters, etc. For IoE, think robots, PCs and all the other more complex devices.

\n" }, { "Id": "265", "CreationDate": "2016-12-09T23:16:51.260", "Body": "

I've got multiple Hue White bulbs, multiple Dimmer switches and a second generation Hue hub. Sometimes, pressing a dimmer button doesn't result in the bulb reacting in any way. Sometimes I have to press multiple times to switch the light on or off. Sometimes I even have to use the hard switch to get what I need. All 5 dimmers in my house have already (<1 year) exhibited this behaviour at least a couple of times. Some are worse than others.

\n\n

I don't think I've ever seen my lights ignore the Hue app.

\n\n

I wonder what the reason could be and what troubleshooting I could do. All distances are very reasonable (max distance between 2 nearest bulbs is <5 m). Usually the system works. Sometimes the devices that fail in this way are actually those located the closest to the hub.

\n\n

I would also like to know if the dimmer is actually connected to the bulb directly or via the hub.

\n", "Title": "Hue Dimmer Switch frequently \"ignored\" by the light", "Tags": "|philips-hue|", "Answer": "

I have seen this behaviour when my hub was located next to my wireless router. One of the antenna sleeves on the router came off and disrupted communications to the hub from my lights & dimmer switches.

\n\n

ZigBee and Wi-Fi frequencies overlap on certain channels (2.4 GHz), so you could try changing the ZigBee channel in the Hue app if moving the Hue hub further from the router isn't an option.
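The overlap can be estimated from the published channel plans: 802.15.4 (ZigBee) channel n (11-26) is centred at 2405 + 5·(n − 11) MHz with a roughly 2 MHz-wide signal, while 802.11 channel m (1-13) is centred at 2412 + 5·(m − 1) MHz and occupies roughly 22 MHz. A small sketch (the bandwidth figures are approximations, not exact spectral masks) shows which ZigBee channels clear the common Wi-Fi channels:

```python
# Rough overlap check between 2.4 GHz ZigBee and Wi-Fi channels.
# Centre frequencies follow the 802.15.4/802.11 channel plans; the
# bandwidths (2 MHz and 22 MHz) are approximations for illustration.

def zigbee_centre(ch):          # 802.15.4 channels 11-26
    return 2405 + 5 * (ch - 11)

def wifi_centre(ch):            # 802.11 channels 1-13
    return 2412 + 5 * (ch - 1)

def overlaps(zigbee_ch, wifi_ch, zb_bw=2, wifi_bw=22):
    # Two signals overlap if their centre distance is smaller than
    # the sum of their half-bandwidths.
    return abs(zigbee_centre(zigbee_ch) - wifi_centre(wifi_ch)) < (zb_bw + wifi_bw) / 2

# ZigBee channels clear of the common Wi-Fi channels 1, 6 and 11:
clear = [zb for zb in range(11, 27)
         if not any(overlaps(zb, w) for w in (1, 6, 11))]
print(clear)  # [15, 20, 25, 26] under these bandwidth assumptions
```

This reproduces the commonly recommended "quiet" ZigBee channels when neighbouring Wi-Fi sits on channels 1, 6 and 11.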

\n" }, { "Id": "268", "CreationDate": "2016-12-10T12:11:21.077", "Body": "

The Wi-Fi Alliance's relatively new Wi-Fi HaLow (802.11ah) specification seems to be ideal in some characteristics for IoT devices:

\n\n
\n

Wi-Fi HaLow will enable a variety of new power-efficient use cases in the Smart Home, connected car, and digital healthcare, as well as industrial, retail, agriculture, and Smart City environments.

\n \n

Wi-Fi HaLow extends Wi-Fi into the 900 MHz band, enabling the low power connectivity necessary for applications including sensor and wearables. Wi-Fi HaLow\u2019s range is nearly twice that of today\u2019s Wi-Fi, and will not only be capable of transmitting signals further, but also providing a more robust connection in challenging environments where the ability to more easily penetrate walls or other barriers is an important consideration.

\n
\n\n

However, as mentioned in the linked source, HaLow operates in the 900 MHz band, which, according to eWeek, is unlicensed:

\n\n
\n

Unfortunately, the new HaLow standard doesn't have its frequencies to itself. Because the 900MHz band is shared with other licensed services, the new WiFi band is subject to interference from other users and there is no remedy when that interference happens.

\n \n

For example, if a ham radio operator next door goes on the air with a powerful signal that wipes out your smart thermostat, you're out of luck. Because you're an unlicensed service, you're required to accept that interference.

\n \n

However, if your smart thermostat happens to cause interference to that ham radio operator next door, then you're required to stop doing it. As an unlicensed user, you have few rights to the spectrum if someone else wants to use it.

\n
\n\n

Presumably this is related to the FCC rules which are commonly seen on RF products:

\n\n
\n

This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.

\n
\n\n

Does this make HaLow too problematic for use as a communication method, since my transmissions could easily be forced to stop if someone else was transmitting in that frequency? If I wished to design a device using HaLow, how could I avoid interference which would require me to stop broadcasting?

\n", "Title": "Is Wi-Fi HaLow unsuitable for IoT applications because it operates in unlicensed frequencies?", "Tags": "|networking|standards|wifi-halow|", "Answer": "

The WiFi we know and use now also shares the 2.4 GHz frequency range with a lot of other technologies and applications that might interfere. If we have a look at the list of 2.4 GHz radio usage, quite a few items are there besides WiFi.

\n
\n\n
\n

Moreover the 2.4 GHz band has the following licensed users:

\n
\n

FIXED, MOBILE, RADIOLOCATION, Amateur & Amateur-satellite service

\n
\n

So accepting interference is pretty much the case right now too.

\n

From The Verge's article, "There's a new type of Wi-Fi, and it's designed to connect your smart home":

\n
\n

It'll be in the 900MHz range, which has better reach and penetration than the 2.4GHz and 5GHz range that existing Wi-Fi operates in. (But, like existing Wi-Fi, it'll be in operating in unlicensed spectrum, so there may be interferences.) There does, of course, have to be a downside. And there is: HaLow isn't going to be as good at quickly transferring data. This isn't Wi-Fi for browsing the web; it's for transferring small bits of data on infrequent occasions. Device manufacturers can, to some extent, customize HaLow to their needs to get faster transfers, but that'll happen at the expense of battery life.

\n
\n

So, as WiFi HaLow is designed for lower bandwidth and for battery-saving low-power applications, it might be a better solution even though we are already talking about interference, since it is intended to transfer less data than the original WiFi.

\n" }, { "Id": "271", "CreationDate": "2016-12-10T14:47:02.787", "Body": "

Ubuntu Core, Canonical's latest version of Ubuntu for IoT devices, says that its new Snappy package manager is ideal for the Internet of Things, and Wikipedia says that:

\n\n
\n

Snappy packaging has been deployed in internet of things environments, ranging from consumer-facing products to enterprise device management gateways

\n
\n\n

However, package managers on Linux aren't a new thing by any means - APT has been around since mid-1998 - so why is Snappy considered so much better by Canonical for IoT? Are other package managers' security practices unsuitable for IoT or is there another factor that is more important?

\n", "Title": "What makes Ubuntu Core's \"Snaps\" better than normal packages for IoT devices?", "Tags": "|ubuntu-core|snappy|package-managers|", "Answer": "

There are two advantages as far as I know:

\n\n

First: Snap packages bring their own dependencies with them, so there is no dependency hell.

\n\n

Second: Snap packages can be installed for one user only, giving more control over who is running that software.

\n\n

Some quotes (including source links):

\n\n

from https://insights.ubuntu.com

\n\n
\n

Snaps are isolated from one another to guarantee data security, and\n can be updated or rolled back automatically, making them perfect for\n connected devices. Multiple vendors have launched snappy IoT devices,\n enabling a new class of \u201csmart edge\u201d device with IoT app store. Snappy\n devices receive automatic updates for the base OS, together with\n updates to the apps installed on the device.

\n \n

(source)

\n
\n\n

from https://developer.ubuntu.com

\n\n
\n

Ubuntu Core is in many ways simply another flavor of Ubuntu (eg, the\n root filesystem is built from packages from the same Ubuntu archive as\n other flavors), but it differs in many important ways:

\n \n \n \n

The above qualities aim to address many of the challenges inherent in\n the traditional Linux distribution model and greatly increase\n reliability, predictability and security.

\n \n

(source)

\n
\n\n

from http://snapcraft.io/

\n\n
\n

A snap is a fancy zip file containing an application together with its\n dependencies, and a description of how it should safely be run on your\n system, especially the different ways it should talk to other\n software.

\n \n

Most importantly snaps are designed to be secure, sand-boxed,\n containerized applications isolated from the underlying system and\n from other applications. Snaps allow the safe installation of apps\n from any vendor on mission critical devices and desktops.

\n \n

(source)

\n
\n" }, { "Id": "277", "CreationDate": "2016-12-10T16:29:42.363", "Body": "

I've been interested in the applications of Docker on IoT devices such as Raspberry Pis.

\n\n

After reading A Performance Evaluation of Container Technologies\non Internet of Things Devices, I was slightly confused by one of the results. In Table 1, the power consumption shown under Apache 2 Benchmarking (200 clients) shows that using a Docker container reduced power consumption, despite the overhead of containerisation using Docker.

\n\n

Why does this occur? Is this reliable enough to be used to slightly reduce power consumption of IoT devices, and would there be any drawbacks?

\n", "Title": "Why does Docker reduce power usage on an Internet of Things device in this benchmark?", "Tags": "|raspberry-pi|docker|power-consumption|linux|", "Answer": "

After some further investigation, I think the issue in the question is that although the power (rate of energy transfer) was reduced, the overall energy consumption was increased by using Docker, so there is no benefit in terms of reduced electricity costs.

\n\n

Based on the paper's figures for 100,000 requests, we can calculate the energy usage through the formula:

\n\n
\n

Energy = power x time

\n
\n\n

Given that the native code consumed 2.4893 W of power, and took approximately 170 seconds (see Figure 3, Native 200), we know that the energy used was:

\n\n
\n

2.4893 W * 170 s

\n \n

= 423.181 Ws = 423.181 J (1 watt-second is equivalent to a joule, or, in other words, a watt is a joule per second)

\n
\n\n

For the Docker code, the power usage was 2.3642 W, but the time taken was 220 seconds, so:

\n\n
\n

2.3642 W * 220 s

\n \n

= 520.124 Ws = 520.124 J

\n
\n\n

Hence, the overall energy usage for the example was 96.943 J higher, which is clearly undesirable if energy usage is a concern. However, using Docker does have other advantages for deployment and management, but in tightly constrained environments (e.g. battery-only), it would seem that it is best avoided.
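The arithmetic above can be double-checked in a couple of lines:

```python
# Verify the energy comparison: energy (J) = power (W) * time (s),
# using the paper's figures for the 100,000-request benchmark.
native_energy = 2.4893 * 170   # native run: ~170 s at 2.4893 W
docker_energy = 2.3642 * 220   # Docker run: ~220 s at 2.3642 W

print(round(native_energy, 3))                  # 423.181 J
print(round(docker_energy, 3))                  # 520.124 J
print(round(docker_energy - native_energy, 3))  # 96.943 J extra
```

So although Docker draws less instantaneous power, the longer runtime makes its total energy use higher.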

\n" }, { "Id": "278", "CreationDate": "2016-12-10T21:21:05.620", "Body": "

I have a LightwaveRF thermostatic radiator valve, and a hub to connect this to the internet. Setting a schedule seems particularly problematic. This is the valve in question.

\n\n

I have the option of an Android app or the website; each seems to have its own problems around the 6am point, which is designated as the end of day.

\n\n

If I set a schedule using the web interface, it doesn't seem to be synced across to the mobile view.

\n\n

Is there any reliable way to edit these schedules? I see there are old comments in various places which confirm these are not unique problems. I'd rather not set up a dedicated server for control, but it's an option.

\n\n

I want a profile like this:

\n\n
5am-7:30    20\u00b0C   \n10pm-11pm   20\u00b0C\n11pm-5am    18\u00b0C\n
\n\n

The design of both UIs appears to insist there are no active settings spanning 6am. It seems very easy to enter parameters which are detected by the web page as conflicting between one day and another. Even setting 5am seems to confuse the system - it gets automatically put in the previous day.

\n", "Title": "Schedule for Lightwave RF thermostat", "Tags": "|smart-home|lightwave-rf|", "Answer": "

Well, there is an online scheduling service you could use: If This Then That (IFTTT) offers date and time applets that can trigger certain events. They also have Lightwave RF heating recipes, which proves that IFTTT works with the valve in general.

\n\n

If you are not opposed to writing an IFTTT recipe, you might be able to create one that implements the schedule - maybe even a configurable schedule - and triggers your valves. The [API of the valve] seems to indicate that the necessary commands should be available in theory.

\n\n

This might be a way. I'll try to detail it a bit more later on.
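As a sketch of what such a recipe (or any external scheduler) would need to encode, the requested profile can be modelled as plain time intervals, with the 11pm-5am setting simply spanning midnight — nothing in the data model needs a special "end of day" at 6am. The setpoints mirror the profile in the question; everything else is illustrative, not LightwaveRF's or IFTTT's actual API:

```python
# Model the requested heating profile as (start, end, degrees C)
# intervals in minutes since midnight. Intervals may span midnight;
# there is no special "end of day" boundary.

SCHEDULE = [
    (5 * 60,  7 * 60 + 30, 20),   # 5am-7:30am  -> 20 C
    (22 * 60, 23 * 60,     20),   # 10pm-11pm   -> 20 C
    (23 * 60, 5 * 60,      18),   # 11pm-5am    -> 18 C (spans midnight)
]

def setpoint(hour, minute):
    """Return the target temperature at a time, or None if unscheduled."""
    t = hour * 60 + minute
    for start, end, temp in SCHEDULE:
        if start <= end:
            if start <= t < end:
                return temp
        elif t >= start or t < end:   # interval wraps past midnight
            return temp
    return None

print(setpoint(6, 0))    # 20   (inside 5am-7:30)
print(setpoint(23, 30))  # 18   (inside the midnight-spanning interval)
print(setpoint(12, 0))   # None (no setting at midday)
```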

\n" }, { "Id": "281", "CreationDate": "2016-12-11T12:52:57.260", "Body": "

I want to use English Alexa skills on a German Echo Dot. But when I switch to US English to access English skills I get this problem:

\n\n
\n

Die ausgew\u00e4hlte Spracheinstellung stimmt nicht mit den Einstellungen Ihres Amazon-Kontos \u00fcberein.
\n Daher werden Sie nicht auf Skills zugreifen k\u00f6nnen.
\n Um auf Skills zuzugreifen und die optimale Spracherfahrung zu nutzen, gehen Sie bitte in den Einstellungen zur Sprachoption, und w\u00e4hlen Sie Deutsch aus.

\n
\n\n

Translation:

\n\n
\n

The language you selected does not match your Amazon account settings.
\n As a result, Alexa skills are not available.
\n To get the best Alexa experience, please choose German by selecting continue.

\n
\n\n

I switched the account settings, but that did not yield any positive results. How do I use the bigger Alexa skill base that exists for English?

\n", "Title": "How to use English Alexa Skills on my German Echo Dot", "Tags": "|amazon-echo|alexa|", "Answer": "

Okay, so this is a bit of a mess, but this is how I did it.

\n\n

First, I tried linking my German account and had the same problem with skills. I could use Alexa in English, but not install skills. Frustrating.

\n\n

Next, I created a second Amazon account in the USA and linked my Echo Dots, giving me access to the skills. Of course, I no longer have access to prime music, but the skill library was more important to me than my German Amazon content. So that's a choice you need to make.

\n\n

Even after I installed the skills, some weren't functioning correctly, especially Plex and Harmony with finding my devices. Again, it seems that the German Alexa app (on iPhone) was causing problems, as I would get another language warning from time to time. So I had to switch my iTunes back to the USA store and redownload the Alexa app, so I had the USA version. Now, everything is working perfectly. So if you go this route, there are probably other skills that won't work correctly with the German version of the Alexa app.

\n\n

Hope this helps!

\n" }, { "Id": "286", "CreationDate": "2016-12-11T14:38:28.760", "Body": "

Many common IoT communication protocols I've researched have adopted a mesh topology (for example ZigBee, Thread and Z-Wave), which is a significant contrast to the usual star topology of Wi-Fi, where every device connects to one router/hub.

\n\n

EETimes also state that:

\n\n
\n

Mesh networking is emerging as an ideal design solution for interconnecting a large number of network devices.

\n
\n\n

EETimes suggest that reliability improvements (e.g. self-healing transmissions) are one of the main advantages of a mesh network, though this seems like a small advantage compared to the added complexity of setting up a mesh network.

\n\n

For a home IoT network which is likely to contain about 10-20 networked devices and spread a short range from end-to-end, what makes mesh networks more suitable than a regular star topology? Is the added complexity not as significant as I seem to believe it is?

\n", "Title": "Why are mesh networks used more frequently for IoT networks?", "Tags": "|networking|topologies|mesh-networks|", "Answer": "

Mesh networks tend to give better local configuration options for an IoT network. Range extension was already mentioned: each device in the network helps make the network bigger than any single device could manage alone. Another important aspect is how the messages are routed. While some devices will remain stationary, others may be moved around. This could make routing messages in a typical (fixed) way difficult, but a mesh network handles it better because all the nodes are able to look for the device in question.

\n\n

A practical example of this at work: in a Z-Wave network you can rediscover all the nodes, so the controller and other nodes can figure out the best path for messages, and which nodes can talk to which — with and without relaying the message through another device. More information can be seen on this page in the \"Mesh and Routing\" section.
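Route discovery of this kind boils down to finding relay paths through whichever nodes can currently hear each other. A toy breadth-first search over a hypothetical adjacency map (real Z-Wave routing uses its own algorithms) illustrates why a moved node doesn't break the network as long as some path remains:

```python
from collections import deque

# Toy mesh: which nodes can currently hear which (hypothetical layout).
links = {
    "controller": {"lamp1"},
    "lamp1": {"controller", "lamp2"},
    "lamp2": {"lamp1", "plug"},
    "plug": {"lamp2"},
}

def route(graph, src, dst):
    """Shortest relay path from src to dst via breadth-first search."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # unreachable

print(route(links, "controller", "plug"))
# ['controller', 'lamp1', 'lamp2', 'plug'] - two intermediate relays
```

The controller reaches the plug even though they are out of direct range, because the two lamps relay the message.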

\n" }, { "Id": "287", "CreationDate": "2016-12-11T17:01:25.687", "Body": "

In May 2018, the European Union's General Data Protection Regulation will come into force, and EU citizens will be given additional rights in regards to their data. As well as this, data controllers (organisations that collect data on users) will have additional obligations placed upon them.

\n\n

Interestingly, one of the new rights given to users is the right to data portability. Wikipedia defines it like so:

\n\n
\n

A person shall be able to transfer their personal data from one electronic processing system to and into another, without being prevented from doing so by the data controller. In addition, the data must be provided by the controller in a structured and commonly used electronic format. The right to data portability is provided by Article 18 of the GDPR. Legal experts see in the final version of this measure a \"new right\" created that \"reaches beyond the scope of data portability between two controllers as stipulated in Article 18.\"

\n
\n\n

For the purpose of this question, take the example of a smart health tracker (such as a FitBit). Will I be able to export data from my FitBit tracker and then import the data into a competitor's tracker?

\n\n

In addition, how will I be expected to comply with this regulation if I design my own IoT device that synchronises with the Internet?

\n", "Title": "Will \"smart\" devices be required to allow import and export of data under the GDPR?", "Tags": "|standards|data-portability|", "Answer": "

I think this will be difficult to answer (ask three lawyers, get four answers), not to mention that it lies in the future. However, I would argue that not every device (in the technical sense) will be required to comply with this rule.

\n\n

Consider a use case where smart devices work without an external or cloud-based data service, e.g. a smart home system where IoT devices report to a central node in said home but do not upload anything. In this case there is simply no data controller.

\n\n

If on the other hand a system uses the data services of a data controller to collect, process, and store user data (e.g. the mentioned FitBit) they shall be required to enable you to take hold of this data and use it with another provider. I argue that the electronic processing system is not necessarily your device (the tracker itself) but the external data service this provider offers. This (to me) also implies that your right to get this data in a structured and commonly used electronic format does not require the first provider to give you the data in the file format of the second provider if their format is proprietary and not commonly used. From a technical standpoint we would expect a common API to allow for this interchange of data but I dare say that we will have to wait for this for some time and quite a number of disputed court decisions.
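As one concrete reading of "structured and commonly used electronic format", an export facility returning plain JSON would likely qualify. A minimal sketch — the field names and schema label are made up for illustration, not any provider's real format:

```python
import json

# Hypothetical tracker readings; the field names are illustrative only.
readings = [
    {"timestamp": "2016-12-11T08:00:00Z", "steps": 412, "heart_rate": 71},
    {"timestamp": "2016-12-11T09:00:00Z", "steps": 1890, "heart_rate": 88},
]

def export_portable(data):
    """Serialise user data to a structured, commonly used format (JSON)."""
    return json.dumps({"schema": "tracker-export/v1", "readings": data},
                      indent=2, sort_keys=True)

exported = export_portable(readings)
# A competing provider only needs a JSON parser to import this:
print(json.loads(exported)["readings"][0]["steps"])  # 412
```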

\n\n
\n

In addition, how will I be expected to comply with this regulation if I design my own IoT device that synchronises with the Internet?

\n
\n\n

If it is DIY and only you use it, you'll most likely be fine. If you start turning this into a business, plenty of regulations, including this one, will apply to you.

\n" }, { "Id": "288", "CreationDate": "2016-12-11T18:15:08.787", "Body": "

I've been looking into how to set up Ubuntu Core (the IoT version of Ubuntu) on a Raspberry Pi, and I've read about gadget snaps, which the documentation says is intended to define the device features:

\n\n
\n

The gadget snap is responsible for defining and manipulating the system properties which are specific to one or more devices that will usually look similar to one another from an implementation perspective.

\n
\n\n

What's the motivation for having a gadget snap? Can the information not be obtained through normal system features such as lshw, or is there another reason for this (perhaps security or a more declarative environment)?

\n\n

The reason I ask is because if I wished to use Ubuntu Core on a different device, the pre-defined gadget snaps won't be suitable, but I'm unsure why I even need a gadget snap in the first place.

\n", "Title": "What's the purpose of a gadget snap in Ubuntu Core?", "Tags": "|ubuntu-core|snappy|", "Answer": "

According to developer.ubuntu.com, there are basically two purposes:

\n

- Declare hardware capabilities to the system

\n

Quoting from Ubuntu Documentation:

\n
\n

The gadget snap is responsible for defining and manipulating the system properties which are specific to one or more devices that will usually look similar to one another from an implementation perspective. This snap must necessarily be produced and signed by the device brand, which is defined via the model assertion. The brand knows where and how that device will be used, and designs the gadget snap accordingly.

\n

For example, the brand may know that the device is actually a special VM to be used on a particular cloud, or it may know that it is going to be manufactured in a particular factory. The gadget snap may encode the mechanisms for device initialization - key generation and identity certification - as well as particular processes for the lifecycle of the device, such as factory resets. It is perfectly possible for different models to share a gadget snap.

\n
\n

- Pre-allow access to snaps that need to use this hardware

\n

Also from the Ubuntu Documentation:

\n
\n

The optional prepare-device hook is a script that will be called on the gadget at the start of the device initialization process, after the gadget snap has been installed. The hook will also be called if this process is retried later from scratch in case of initialization failures.

\n

The device initialization process is for example responsible of setting the serial identification of the device through an exchange with a device service. The prepare-device hook can for example redirect this exchange and dynamically set options relevant to it.

\n
\n" }, { "Id": "294", "CreationDate": "2016-12-12T09:07:24.593", "Body": "

In my understanding, in MQTT a topic is created once a client publishes something with the corresponding topic name.

\n\n
\n

There is no need to configure a topic, publishing on it is enough.

\n
\n\n

From here.

\n\n

Is it possible for a client to subscribe to its own topic after it has published to (and thereby created) it? I could not find any restrictions on this in the specification. It is not listed as possible abnormal behaviour either:

\n\n
\n

5.4.8 Detecting abnormal behaviors

\n \n

Server implementations might monitor Client behavior to detect potential security incidents. For example:

\n \n \n
\n\n

Based on this, I think it is certainly possible. So I am interested: what are the use cases of this feature?

\n\n

Why does the standard allow such a mechanism? Would it be too complicated to track the owners of topics, so instead it simply allows clients to subscribe to their own topics?

\n\n

One use case I can think of is that this way a client can verify its published data.

\n", "Title": "Can an MQTT client subscribe to a topic created by itself?", "Tags": "|mqtt|", "Answer": "

Yes.

\n\n

MQTT clients are connected to a broker which can be a cloud or some other device.

\n\n

There is no such thing as explicit creation of a topic. A topic simply serves as the heading of a message. So if your client has subscribed to a topic and then publishes something on that same topic, the message will be delivered back to the client via the broker.

\n\n

Examples of good brokers are Mosquitto for running on devices and CloudMQTT for a cloud-based broker.
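To see why this works, note that the broker only keeps topic-to-subscriber mappings and has no notion of a topic's "owner" or "creator". A toy in-memory illustration of that idea (not real MQTT, and no broker or client library involved):

```python
# Toy in-memory illustration (not real MQTT) of why a client can receive its
# own publications: the broker only keeps topic -> subscriber mappings and
# has no notion of a topic "owner" or "creator".

class ToyBroker:
    def __init__(self):
        self.subscriptions = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Deliver to every subscriber of the topic -- including the
        # publishing client itself, if it happens to be subscribed.
        for callback in self.subscriptions.get(topic, []):
            callback(topic, payload)

broker = ToyBroker()
received = []
broker.subscribe("sensors/temp", lambda t, p: received.append((t, p)))
broker.publish("sensors/temp", "21.5")  # the same client publishing
# received is now [("sensors/temp", "21.5")]
```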

\n" }, { "Id": "298", "CreationDate": "2016-12-12T18:41:55.573", "Body": "

MQTT allows senders to set a Quality of Service (QoS) level, which provides certain guarantees about whether a message will be received (and whether duplicates are permitted). This article from HiveMQ highlights the problem of downgrading, where a client with a lower QoS level will not receive the message with the guarantees that the sender requested:

\n\n
\n

As already said, the QoS flows between a publishing and subscribing client are two different things as well as the QoS can be different. That means the QoS level can be different from client A, who publishes a message, and client B, who receives the published message. Between the sender and the broker the QoS is defined by the sender. When the broker sends out the message to all subscribers, the QoS of the subscription from client B is used.

\n
\n\n

Does MQTT provide a way of indicating that this downgrade is not acceptable, and that the message must be delivered using the original sender's requested QoS? Is the only option to make sure that both the sender and the receiver have the desired QoS setting before transmitting the message?

\n", "Title": "Is there a way to preserve the MQTT QoS level until it reaches the client?", "Tags": "|mqtt|communication|", "Answer": "

One thing to remember when working with MQTT is that \"both subscribers and publishers are considered MQTT clients\".

\n\n

As noted, the QoS set while publishing only applies between the publisher and the broker (B), not between the broker and the other clients. So to ensure that a subscriber (S) receives everything a publisher (P) publishes, the subscriber needs to subscribe with a QoS at least as high as the publisher's.

\n\n

Let's look at the cases.\nP sends with QoS 0, which means every message arrives at B at most once (one time or zero).\nIf S subscribes to B with QoS 0, there is no guarantee that a message the broker (B) receives will finally reach S.\nSubscribing with QoS 1 or 2 cannot add a guarantee here either, because the broker-to-subscriber delivery is capped at the QoS of the original publish.

\n\n

If we do the same with the other publish QoS levels, we find that subscribing with QoS 2 works with all of them, since the QoS actually used for delivery is effectively the minimum of the publish and subscribe QoS.

\n\n
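The downgrade behaviour follows a simple rule from the MQTT specification: the QoS actually used for delivery to a subscriber is the minimum of the publish QoS and the subscription QoS. A small sketch of the rule (not a broker implementation):

```python
# Effective delivery QoS between broker and subscriber is the minimum of
# the QoS the publisher used and the QoS the subscriber requested.
def effective_qos(publish_qos, subscribe_qos):
    return min(publish_qos, subscribe_qos)

# A QoS 1 subscription downgrades a QoS 2 publish:
assert effective_qos(2, 1) == 1
# Subscribing higher never upgrades the publish QoS:
assert effective_qos(1, 2) == 1
# Only a QoS 2 subscription never downgrades any publisher's message:
assert effective_qos(2, 2) == 2
```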
\n\n

MQTT does not provide any indication of the sender's QoS, but you can make sure no downgrade happens on the subscriber side by subscribing with a QoS at least as high as the sender's (QoS 2 covers every case).

\n" }, { "Id": "300", "CreationDate": "2016-12-12T22:09:41.117", "Body": "

I've just got the Google Home, and SmartThings hub. My TV is already controlled with Logitech Harmony Hub (Smart Home Hub).

\n\n

How do I set it up so that saying \"Ok Google, Watch TV\" would trigger the harmony activity to watch TV?

\n\n\n", "Title": "How to connect Google Home to Harmony hub through SmartThings", "Tags": "|google-home|samsung-smartthings|logitech-harmony|", "Answer": "

The Harmony Hub creates virtual switches in the SmartThings app. If you authorize these with Google Home, you can issue commands like "Turn on Watch TV" or "Turn off Watch TV".

\n\n

The Google Home integration can be found under Things -> Voice Control in the SmartThings Marketplace.

\n" }, { "Id": "301", "CreationDate": "2016-12-12T22:12:08.497", "Body": "

Every time I ask Google Home to play a YouTube video, it searches for (seemingly) random playlists that may or may not contain that video.

\n\n

I have my own playlists, with a specific video.

\n\n

How do I instruct Google Home to play a video from my YouTube playlist?

\n\n

Edit: Requests I've tried:

\n\n

This is the video I want to play. Please note that on the web, simply typing \"Frozen Let It Go\" brings up this video 100% of the time (even in incognito).

\n\n\n\n

Most of the time it brings up the Demi Lovato cover.
\nOther times it loads playlists (again, either with the Demi Lovato cover, or others).

\n\n

I have a playlist called \"Frozen Sing Along\", but I am unable to figure out how to tell Google Home to search within my playlists.

\n\n

I've tried \"Play Frozen Sing Along on TV\", but again, it picks up random playlists, not mine.

\n", "Title": "How to instruct Google Home to play YouTube videos from *my* playlists", "Tags": "|google-home|chromecast|", "Answer": "

I've tested this out extensively, as I really want to be able to verbally initiate one of my YouTube playlists via Google Home.

\n\n

The good news is that you can, but only for your Liked videos. I use a Routine to initiate this, because it gives you the flexibility to specify a preferred verbal command. Here's how I have set it up in the Google Home app on my Android mobile:

\n\n
    \n
  1. Open Home app and go to Account/Settings/Assistant/Routines
  2. \n
  3. Select 'Add action' for the 'When I say' action and type \"My Videos\" (or whatever you prefer to say to initiate the routine)
  4. \n
  5. Select 'Music' option for the 'My Assistant Should' action, and then select the right hand cog (settings) icon.
  6. \n
  7. Where it asks 'What music would you like to play?', enter [your YouTube user name] followed by \"Liked videos\". Just to clarify, here's the exact text I have entered, between these quotes: \"Sydney_Robster Liked videos\".\n5. Use the back button to return to the previous page and select the tick at the top of the page to save this Routine.
  8. \n
  9. Initiate your YouTube Liked Videos Routine by saying to Google Home/Assistant, \"Hey Google, My Videos\" (or whatever command you defined). The routine will play your Liked videos in the order they were added in YouTube. If you want to randomise it, then additionally say \"Hey Google, shuffle\".
  10. \n
\n\n

Another useful Google Home/Assistant command, when you are watching YouTube cast to a Chromecast device: if you see a new video you like, say \"Hey Google, like\". The Assistant will reply \"I have noted that you like this\", 'like' the video, and append it to your list.

\n" }, { "Id": "303", "CreationDate": "2016-12-12T23:29:34.063", "Body": "

I have various lights/switches/dimmers on a SmartThings hub. I can control them individually, but would like to say something like \"Alexa turn on everything\" to turn on all lights, or maybe \"turn on movie time\" to dim the lights turn off the kitchen lights, etc.

\n\n

Hue has 'scenes'; is there something similar for SmartThings?

\n", "Title": "How can I use Alexa to turn on/off multiple lights at the same time with SmartThings hub?", "Tags": "|alexa|samsung-smartthings|", "Answer": "

I am using Fibaro Automation with Automation Bridge and Alexa. I can just say

\n\n

That's just built in.

\n

For further refinement, I have created groups so I can say things like

\n\n

I am not associated with any of the three companies, but I would recommend Automation Bridge for anyone who is serious about full control. It interfaces with a range of automation servers and integrates with Siri, Alexa and Google.

\n" }, { "Id": "306", "CreationDate": "2016-12-13T05:24:14.113", "Body": "

I have several lights connected to relays which are connected to a wiolink

\n\n

I can turn the lights on and off through the REST API, like so:

\n\n
curl https://us.wio.seeed.io/v1/node/GroveRelayD0/onoff/[onoff]?access_token=xxxxx\n
\n\n

How can I access this REST API through Alexa with an Echo Dot?

\n", "Title": "How do I configure Alexa to access a REST API?", "Tags": "|alexa|amazon-echo|voice-recognition|", "Answer": "

See these instructions.

\n\n

Create an Amazon developer account and an AWS account.

\n\n

In the AWS console

\n\n\n\n

Here is a Python script for the Lambda function. Change the state argument passed to modify_state to either 1 or 0:

\n\n
import urllib2  # Python 2 standard library; on Python 3 use urllib.request instead\n\ndef modify_state(port, state, token):\n    url = 'https://us.wio.seeed.io/v1/node/%s/onoff/%s?access_token=%s' % (port, state, token)\n    req = urllib2.Request(url, '')  # empty data makes this a POST request\n    response = urllib2.urlopen(req)\n\ndef lambda_handler(event, context):\n    # Replace <STATE:0:1> with 1 or 0, and <APIKEY> with your Wio access token\n    modify_state('GroveRelayD0', <STATE:0:1>, '<APIKEY>')\n    return {\n        'version': '1.0',\n        'sessionAttributes': {},\n        'response': {\n            'outputSpeech': {\n                'type': 'PlainText',\n                'text': '<whatever witty remark alexa should say>'\n            },\n            'card': {\n                'type': 'Simple',\n                'title': \"SessionSpeechlet - foo\",\n                'content': \"SessionSpeechlet - bar\"\n            },\n            'reprompt': {\n                'outputSpeech': {\n                    'type': 'PlainText',\n                    'text': 'I know right'\n                }\n            },\n            'shouldEndSession': True\n        }\n    }\n
\n\n\n\n

\"aws

\n\n\n\n

\"aws

\n\n
\n\n

In the developer console

\n\n\n\n

\"endpoint

\n\n

You can skip the last 2 steps. The skill will run in development mode and only you will be able to access it. Complete the last 2 steps only if you want to share your skill with anyone in the world.

\n" }, { "Id": "318", "CreationDate": "2016-12-13T17:52:50.617", "Body": "

I've been considering Mosquitto for a MQTT message broker for a home IoT network, but I'm concerned that the broker could be a single point of failure which could bring down my whole network if it failed, since all messages have to go through the broker and no messages can be transmitted at all if the broker goes offline for any reason (e.g. accidental unplugging, hardware failure, etc.)

\n\n

Would it be possible to use multiple brokers with Mosquitto installed to improve the reliability of the network? If it is possible, are there any disadvantages/significant overheads to using multiple brokers?

\n", "Title": "Can Mosquitto support multiple brokers?", "Tags": "|networking|mqtt|mosquitto|", "Answer": "

Yes, Mosquitto does support multiple brokers.

\n\n

Mosquitto uses MQTT bridges to connect multiple brokers, routing messages between them. This way a bridge from your primary broker to a fallback system can be established. Avoid creating loops, though. While both brokers are running, your clients publish to the primary broker, which then delivers each message to every subscriber, including the bridged secondary broker. If the primary fails, your clients will notice (Connection Refused, Server unavailable) and can fall back to publishing directly to the secondary. (I am not yet sure how to handle failover in the other direction.) Since you are not expecting the clients to disconnect ungracefully, I think \"Last Will and Testament\" does not apply here (it would be used to have the broker send a notification on behalf of a disconnected client).

\n\n
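For illustration, a minimal bridge section in the primary broker's mosquitto.conf could look like the following (the connection name and the secondary broker's address are placeholders):

```conf
# Bridge every topic, in both directions, at QoS 0 to a fallback broker
connection fallback-bridge
address secondary-broker.local:1883
topic # both 0
```

Mosquitto's bridge options such as restart_timeout control how the primary keeps retrying the bridge connection if the secondary goes away.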

This post however lists the drawbacks of this approach especially with respect to scalability and availability:

\n\n
\n \n
\n" }, { "Id": "323", "CreationDate": "2016-12-14T07:51:18.677", "Body": "

I am currently using fauxmo to send custom commands to various devices to turn them on/off (for instance, I have a WiFi-to-IR converter to control my tuner and TV), and I can turn the TV on/off with this.

\n\n

I also have Kodi integration setup, so I can say:

\n\n
Alexa, Ask Kodi to set volume to 50%\n
\n\n

But I'd like to be able to say:

\n\n
Alexa, Set tuner volume to 50%\nAlexa, Play Bluray\n
\n\n

I.e. I want to be able to control devices without needing an Ask xxxx as part of my request.

\n\n

Ideally I want to do this without a cloud-based service (i.e. SmartThings or Wink).

\n\n

I like the solution used by fauxmo (emulate an existing UPNP service), but it is limited to on/off (and 'dim' if you use the Hue enabled patch) and not really flexible enough.

\n", "Title": "How to write custom Alexa Skills without 'Ask xxxx'", "Tags": "|smart-home|alexa|", "Answer": "

You should be able to do this now with Alexa's name free interaction.

\n\n
\n

To make your skill more discoverable for name-free interaction, you can implement the CanFulfillIntentRequest interface in your skill

\n
\n" }, { "Id": "327", "CreationDate": "2016-12-14T16:06:03.037", "Body": "

I've recently read about neural networks in constrained environments (in particular, A Neural Network Implementation on an Inexpensive Eight Bit Microcontroller) and their applications to IoT devices (e.g. regression for predicting things based on sensor inputs, etc).

\n\n

This seems ideal for simple applications where processing is not time-critical, and the data to process will be relatively infrequent. However, further research suggests that training a neural network in a resource-constrained environment is a poor idea (see the answer to Is it possible to run a neural network on a microcontroller).

\n\n

Does this still apply for Cotton, Wilamowski and D\u00fcndar's approach that I linked? Would it be necessary to train a network designed for low resource usage on a more powerful device in my IoT network?

\n\n

For context, if I had a sensor transmitting the heat setting, I am considering a neural network as described in the paper to predict the desired boiler setting based on that and the time of day, etc. Training would be useful to change the neural network's outputs based on more data provided by the user. This Quora question describes a similar scenario well, and discusses the implementation details for a neural network, but my question is more focused on whether running the network on the actuator itself would work.

\n", "Title": "Is it possible to run and train a neural network on an 8-bit microcontroller?", "Tags": "|smart-home|microcontrollers|machine-learning|", "Answer": "

According to the first paper, running the network is not a problem; that was its purpose. There is only a limitation on the maximum number of weights:

\n\n
\n

Currently the limitation on the architecture embedded in this microcontroller is limited only by the number of weights needed. The neural network is currently limited to 256 weights. However for most embedded applications this 256 weight should not limit the system.

\n
\n\n
\n\n

As for training, as far as I understand the implementation described, the PIC controller receives parameters from an external source.

\n\n
\n

The neural network forward calculations are written so that each neuron is calculated individually in a series of nested loops. The number of calculations for each loop and values for each node are all stored in a simple array in memory.

\n \n

[...]

\n \n

These arrays contain the architecture and the weights of the network. Currently, for demonstration purposes, these arrays are preloaded at the time the chip is programmed, but in the final version this would not be necessary. The microcontroller could easily be modified to contain a simple boot loader that makes use of the onboard RS232 serial port which would receive the data for the weights and topography from a remote location. This would allow for the weights or even the entire network \n to be modified while the chip is in the field.

\n
\n\n

I suspect that the training is performed externally as well.

\n\n
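The nested-loop forward pass with flat weight arrays that the paper describes can be sketched as follows. This is a Python illustration of the idea only; the actual implementation runs on the PIC with fixed-point arithmetic, and the layer sizes and weights below are made up:

```python
# Toy sketch (an illustration, not the paper's code) of the nested-loop
# forward pass described above: the architecture and weights live in flat
# arrays, mirroring how the PIC stores its 256-weight budget in memory.
import math

def forward(inputs, layer_sizes, weights):
    """Feed-forward pass reading bias+weights sequentially from a flat array."""
    activations = list(inputs)
    w = 0  # index into the flat weight array
    for n_out in layer_sizes:
        next_act = []
        for _ in range(n_out):
            total = weights[w]            # bias weight for this neuron
            w += 1
            for a in activations:         # one weight per incoming activation
                total += weights[w] * a
                w += 1
            next_act.append(math.tanh(total))
        activations = next_act
    return activations

# Made-up example: 2 inputs -> 2 hidden -> 1 output, (2+1)*2 + (2+1)*1 = 9 weights
weights = [0.5, 1.0, -1.0,  -0.5, 1.0, 1.0,  0.0, 1.5, -1.5]
out = forward([1.0, 0.0], [2, 1], weights)  # a single value in (-1, 1)
```

Storing the weights as one flat array is what lets a remote trainer ship a new network to the device by simply rewriting that array, as the quoted passage suggests.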

The paper also gives references for Neural Network Trainers which were probably used to determine the values preprogrammed into the PIC's memory.

\n\n\n\n

Now, I have looked into the first one, which describes network architectures\nand algorithms to use with them. However, the Neural Network Trainer software used there is implemented in MATLAB.

\n\n
\n

Currently, there is very little neural network training software available\n that will train fully connected networks. Thus a package with a graphical user interface has been developed in MATLAB for that purpose. This software\n allows the user to easily enter very complex architectures as well as initial\n weights, training parameters, data sets, and the choice of several powerful\n algorithms.

\n
\n\n

I should mention that fully connected networks need fewer weights for the same task than a layer-by-layer architecture, which makes them more suitable for microcontrollers.

\n\n

I am not a neural network expert and it is quite complex, so I may be wrong, but based on these papers I would say that Cotton, Wilamowski and D\u00fcndar's approach requires an external, more powerful platform to perform the training.

\n\n
\n\n

About running a neural network on a microcontroller, ST Microelectronics just announced a toolkit STM32Cube.AI: Convert Neural Networks into Optimized Code for STM32 to convert pre-trained neural networks from popular libraries to most of their STM32 MCUs.

\n" }, { "Id": "329", "CreationDate": "2016-12-14T20:02:23.297", "Body": "

According to MakeUseOf.com, the Lively Medical Alert Watch...

\n\n
\n

... allows remote health monitoring of your loved ones. The smartwatch can track steps taken as well as other kinds of daily activities, plus it provides an emergency assist button that alerts Lively to call in and check that everything is alright.

\n
\n\n

What does it actually monitor? Specifically I would like to know, does it check the vulnerable person's pulse or neural activity, or is it just checking for various motions/actions throughout the day?

\n", "Title": "What does the Lively Medical Alert Watch Monitor?", "Tags": "|sensors|monitoring|lively-medical-alert|", "Answer": "

What does it actually monitor?

\n\n

From the company's web site:

\n\n
\n

Lively pillbox activity sensors monitor daily medication activity and\n create an alert whenever anything is missed

\n
\n\n

They also have a web based dashboard, which relatives or carers can check.

\n\n
\n

Lively's safety watch features a pedometer to keep track of steps throughout\n the day.

\n
\n\n


\n\n
\n

Coming soon!
\n Clip for auto fall detection

\n
\n\n


\n\n
\n

Turn on/off vibration for reminders, alerts\n Turn on/off medication reminders

\n
\n\n

From a USA Today review

\n\n
\n

When you're out of range, the watch reminds the person to call 911

\n
\n\n

Specifically I would like to know, does it check the vulnerable\nperson's pulse or neural activity, or is it just checking for various\nmotions/actions throughout the day?

\n\n
\n

How do we measure daily activity patterns?

\n \n \n \n

Attach a sensor to any pillbox to keep track of when medication is taken.

\n \n \n \n

Attach a sensor to the refrigerator and other kitchen objects to infer \n when food is prepared or consumed.

\n \n \n \n

Attach a sensor to a movable object that is part of the daily routine\n patterns of older adults to log more detail (e.g., bathroom door \n or favorite chair) in order to log more detail.

\n
\n\n

To answer your question:

\n\n

from the USA Today article:

\n\n
\n

It doesn't monitor sleep or measure heart rate or other vitals.
\n The step counter isn't the most accurate. What is being measured is \n the movement of the watch, so my steps tally climbed even when I lay in bed.

\n
\n\n

Also, Googling for +\"Lively Medical Alert Watch\" +pulse returned zero hits.

\n\n

When the company says

\n\n
\n

infer when food is prepared or consumed

\n
\n\n

it doesn't fill me with confidence.

\n\n

I also quibble with \"keep track of when medication is taken\"; they can infer that pills have been taken from a pill box, but not that they have been ingested.

\n\n

Does this answer your question? It took me 5 minutes Googling, although, in your place, I would have emailed the manufacturer directly.

\n" }, { "Id": "332", "CreationDate": "2016-12-14T23:40:40.147", "Body": "

So I installed mosquitto and mosquitto-client on a Raspberry Pi running Raspbian Jessie through apt-get as well as mosquitto on another Pi running Arch Linux through pacman. On Arch the client utils do not need to be installed separately.

\n\n

Testing simple subscription/publishing on Raspbian worked out of the box.

\n\n
mosquitto_sub -d -t blub\nmosquitto_pub -d -t blub -m \"test\"\n
\n\n

Publishing from the Arch box works as well:

\n\n
mosquitto_pub -h <IP-Raspbian> -t blub -m \"test\"\n
\n\n

Subscribing a topic on the Arch system however gets me just:

\n\n
mosquitto_sub -d -t blub\nError: Connection refused\n
\n\n

Now that is pretty generic. What's wrong here?

\n", "Title": "mosquitto_sub \"connection refused\" on Arch Linux", "Tags": "|mqtt|raspberry-pi|mosquitto|linux|", "Answer": "

Turns out there is no broker running on the Arch system whereas installing mosquitto on Raspbian automatically starts it. Simply enable and start the broker.

\n\n

Start the systemd service.

\n\n
systemctl start mosquitto\n
\n\n

Enable the systemd service to run on boot.

\n\n
systemctl enable mosquitto\n
\n" }, { "Id": "334", "CreationDate": "2016-12-15T01:50:27.427", "Body": "

HiveMQ's blog lists under \"best practices\" not to subscribe to the multi level wildcard when attempting to dump all messages to a database. They claim that the subscribing client may not be able to keep up with a high load of messages and propose to use a broker plugin to directly hook into the stream of messages instead.

\n\n
\n

Sometimes it is necessary to subscribe to all messages, which are transferred over the broker, for example when persisting all of them into a database. This should not be done by using a MQTT client and subscribing to the multi level wildcard. The reason is that often the subscribing client is not able to process the load of messages that is coming its way. Especially if you have a massive throughput. Our recommended solution is to implement an extension in the MQTT broker, for example the plugin system of HiveMQ allows you to hook into the behavior of HiveMQ and add a asynchronous routine to process each incoming message and persist it to a database.

\n
\n\n

Is there either

\n\n\n\n
\n\n

https://stackoverflow.com/q/31584613/3984613 does not address this question exhaustively.

\n", "Title": "Don\u2019t subscribe to # - so how to dump all messages to database with Mosquitto?", "Tags": "|mqtt|mosquitto|", "Answer": "

I think it is important to consider that there are many different use cases for MQTT brokers, as with any piece of software.

\n\n

Handling chat messages for a billion users (many users, relatively low message rate per user) is different to a system with few clients but a high message rate, and they are both different to a home automation system (few clients, low message rate).

\n\n

HiveMQ are thinking about the very high client/message rate applications - in which case the capability of the broker almost certainly far exceeds that of a client.

\n\n

If you want to subscribe to # in your home automation system then it's really unlikely to cause problems. You can check and see if the broker is using excessive CPU in any case.

\n\n

As in the other answers, subscribing to # will give you all 'normal' topics, that is anything that doesn't start with a $. I interpret the spec as saying that each topic beginning with $ is a whole separate tree in itself, so you'd have to subscribe to $SYS/#, $whatever/# to get everything. You most likely don't want to do that anyway for a normal application.
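The `$`-prefix behaviour can be sketched as a simplified topic-filter matcher (an illustration of the MQTT 3.1.1 matching rules, not a broker implementation):

```python
# Simplified sketch of MQTT topic-filter matching: '#' matches any normal
# topic, but filters not starting with '$' must not match $-prefixed topics.

def matches(topic_filter, topic):
    # Wildcard filters don't cross into $-prefixed trees such as $SYS.
    if topic.startswith("$") and not topic_filter.startswith("$"):
        return False
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True              # multi-level wildcard matches the rest
        if i >= len(t_parts):
            return False
        if part not in ("+", t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

assert matches("#", "home/livingroom/temperature")
assert not matches("#", "$SYS/broker/uptime")      # '#' skips $SYS
assert matches("$SYS/#", "$SYS/broker/uptime")
```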

\n" }, { "Id": "347", "CreationDate": "2016-12-15T17:20:02.280", "Body": "

The situation is the following:

\n

There is a client, a publisher; it is not subscribed to any topic. This client has a single topic of its own, and publishes data regularly to it. But there are no other clients subscribed to this topic.

\n

So this poor and lonely client might be considered as abnormal (Chapter 5.4.8).

\n
\n

Server implementations might monitor Client behavior to detect potential security incidents. For example:

\n\n
\n

It has no idea about how many clients have subscribed to its topic. So it does not know that it might be considered a client with abnormal behavior.

\n

So does what happens with such a client depend on the server implementation? What are the common practices? Should it simply be disconnected? But won't it just try to reconnect then?

\n", "Title": "What happens when there is not any subscriber to a topic in MQTT?", "Tags": "|mqtt|", "Answer": "

As you say, it depends on the server implementation, especially when the QoS of the transmitted messages is \"at least once\".

\n\n

IMHO MQTT is a broadcasting system, not an end-to-end protocol between two machines, so we don't absolutely need a subscriber every time we create a topic.

\n\n

I can post anything (temperature,...) and two months later implement something that will read it, or even remove it and think of something else while my sensor still publishes data.

\n" }, { "Id": "348", "CreationDate": "2016-12-15T19:15:28.667", "Body": "

Protocols that are modelled on the publish-subscribe pattern such as MQTT and AMQP require a centralised message broker to co-ordinate messages being sent and received. This does not pose much of a problem when your IoT network is based on a star topology, where all messages have to go through a central hub anyway, however I've been thinking about the benefits of mesh networks and how these may be affected by protocol choice.

\n

The Thread Introduction presentation outlines several benefits of Thread's mesh network in particular (however these should apply generally):

\n
\n

\u2714 No single point of failure

\n

\u2714 Self-healing

\n

\u2714 Interference robustness

\n

\u2714 Self-extending

\n

\u2714 Reliable enough for critical infrastructure

\n
\n

Although I can't imagine the latter four points being affected by protocol choice, I'm curious as to whether using a message-brokered protocol would cancel out any advantages of the mesh network's "no single point of failure".

\n

Does using a publish-subscribe based protocol introduce an inevitable single point of failure in general, and is this why the Thread Introduction presentation suggests CoAP instead as a potential protocol to use?

\n
\n

I've already asked about Mosquitto supporting multiple brokers to remove the single point of failure, but I'm asking this to question whether this is a fundamental conflict between mesh networks and publish-subscribe protocols.

\n", "Title": "Do protocols based on the publish-subscribe pattern negate the benefits of mesh networks?", "Tags": "|publish-subscriber|mesh-networks|thread|", "Answer": "

Yes and no.

\n\n

The two technologies concern different levels of providing connectivity. Mesh networking is usually provided by layer 3 or 4, or even both, of the ISO OSI model, depending on the extent of the implementation. The network and transport layers provide the basic reliability of the mesh network, and that reliability is usually not impeded when a node drops off.

\n\n

MQTT and AMQP are application layer protocols on level 7. Therefore these protocols are dependent on the reliability of the lower levels as far as the basic model goes. However it's always the prerogative of higher OSI levels to implement safeguards to cope with failures of the lower levels. For example the application can switch to a completely different network, like from Wi-Fi to 4G for example if it detects network failure. Smartphones do it all the time when we enter or leave a place with a configured Wi-Fi.

\n\n

There are also possibilities for the lower levels to accommodate failures in the upper levels. OSI layer 4 load balancing, for example, can compensate for failing nodes behind it. Of course, that requires that each node addressable for load balancing or failover can provide the same service, and obviously you need the central component at least twice. Since MQTT is basically application-level routing based on topics, that should be possible by simple duplication. This is an example of an MQTT cluster solution with the HiveMQ implementation.

\n\n

With that in mind it can be concluded that, no, the reliability on the network and transport levels cannot be negated by the choice of any higher level protocols. However that does not apply to the user experience. For the user the lower level protocols are just vehicles. Using an application layer protocol that has a single point of failure still means, that if that node is broken, then yes, the functionality is broken even though my network is still working.

\n\n

Anyhow, the application layer and above are responsible to provide the reliability to the user. Mesh networks can only deliver the basics.

\n\n

There is one final thing to consider. Unless one has redundancies for every component, there are always use cases that have single points of failure. It's most likely the node the user actually interacts with. In home automation for example every failing node very likely means that one just lost a use case.

\n" }, { "Id": "351", "CreationDate": "2016-12-16T05:30:25.000", "Body": "

We are making a smart home safety system for a small academic project (Yes. I am a newbie for this). We need a Gas Sensor (LP Gas) for the project, but there are a few questions that I have.

\n\n

I found the MQ-6 sensor, which can be used for sensing LP gas leakage. But the problem is: what is its sensing range?

\n\n

I live in a tropical country, where there is always high temperature and humidity.

\n\n
    \n
  1. How can these weather conditions affect the sensor?
  2. \n
  3. What happens if we keep it in a large room? Will it still be able to sense the gas?
  4. \n
  5. How close do I have to place the sensor to the cylinder?
  6. \n
\n\n

Any ideas will be helpful. Thanks in advance.

\n", "Title": "Using LP Gas Sensor in a Smart Home Safety System", "Tags": "|smart-home|safety|sensors|", "Answer": "

Another area to consider is corrosion in electrical signal paths, particularly in tropical climates close to the ocean. If the metal electrical connections are exposed to the elements, the conductive medium starts deteriorating, causing a change in the electrical signal. Therefore good mechanical packaging might be something to consider.

\n\n

Alternatively, in tropical climates Liquid Petroleum Gas (LPG) is mostly used for cooking, and the LPG cylinder tends to be placed close to the stove. So if you can monitor use of the stove in relation to LPG cylinder pressure, it might be possible to detect and notify of an LP gas leak using something like IFTTT. For example: if the stove is not in use and the LP gas pressure is dropping, then there might be a leak, so alert interested parties.

\n\n
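The stove-versus-pressure rule could be sketched like this (a toy illustration; the threshold, units and function names are made-up assumptions):

```python
# Toy sketch of the leak-detection rule described above: if the stove is off
# but cylinder pressure keeps dropping, flag a possible leak. The threshold
# and pressure units are illustrative assumptions.
def possible_leak(stove_in_use, pressure_readings, drop_threshold=0.5):
    """pressure_readings: recent samples, oldest first (arbitrary units)."""
    if stove_in_use or len(pressure_readings) < 2:
        return False  # the stove explains the drop, or too little data
    drop = pressure_readings[0] - pressure_readings[-1]
    return drop > drop_threshold

assert possible_leak(False, [10.0, 9.6, 9.2])        # dropping while off
assert not possible_leak(True, [10.0, 9.0])          # stove is in use
assert not possible_leak(False, [10.0, 10.0, 10.0])  # stable pressure
```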

Deviating outside the scope of the question: additional services can be created to notify users when LP gas is running low, or even to monitor usage patterns to help manage LP gas consumption. It is common in developing countries to keep a small secondary backup gas cylinder to bridge the gap until a refill is available. Providing monitoring capabilities can help eliminate the need for the secondary LP gas cylinder.

\n\n

References:

\n\n\n" }, { "Id": "357", "CreationDate": "2016-12-16T17:05:08.647", "Body": "

Several news sources such as Intellihub and CEPro seem to suggest that Amazon's Echo home assistant constantly listens to conversations and sends them via the Internet to Amazon's servers. CEPro states that:

\n
\n

By saying a key phrase Amazon calls a \u201cwake word\u201d the Echo comes to life and begins listening for commands. By default, the wake word is Alexa.

\n

If you reread that last sentence it may not make sense, especially if you are in the security field. According to Amazon, the Echo only listens for commands once it hears its wake word. How does it know when you have said the wake word if it wasn\u2019t already listening?

\n
\n

Intellihub's article is similar in its sentiment:

\n
\n

The \u201cAmazon Echo\u201d device, a constantly-listening Bluetooth speaker that connects to music streaming services like Pandora and Spotify at the sound of a person\u2019s voice, can be easily hacked and used by government agencies like the FBI to listen in on conversations.

\n
\n

(Note that I'm not particularly focused on exploring the hacking aspect of this question, since that would probably be too much for one question. My main focus is the always-on aspect and whether this sends data all the time.)

\n

Neither article seems particularly keen to disclose a source for its claims, which suggests to me that they are unproven at best, or clickbait at worst.

\n

Is the Echo always recording and sending data to the cloud, or are the above claims unsubstantiated? How does the Amazon Echo process data if it's not always sending data to servers in the cloud?

\n", "Title": "Is the Amazon Echo 'always listening' and sending data to the cloud?", "Tags": "|amazon-echo|privacy|", "Answer": "
\n
\n

By saying a key phrase Amazon calls a \u201cwake word\u201d the Echo comes to life and begins listening for commands. By default, the wake word is Alexa.

\n \n

If you reread that last sentence it may not make sense, especially if you are in the security field. According to Amazon, the Echo only listens for commands once it hears its wake word. How does it know when you have said the wake word if it wasn\u2019t already listening?

\n
\n
\n\n

The Echo listens actively for the keyword and passes the words spoken after the keyword to NLU (natural language understanding) processing. Here is my understanding of how the Echo achieves this neat feat.

\n\n

Echo is built on Texas Instruments DM3725 Digital Media Processor.

\n\n

This TI SoC has two key pieces inside: the first is an ARM Cortex-A8 MPU, and the second is a TMS320DM64x+ DSP. The ARM core runs Linux, and the DSP runs firmware.

\n\n

When idling, the ARM core is taken to its lowest possible power state and Linux is completely suspended. At this time the DSP and 64 KB of on-chip RAM remain active. The DSP firmware processes the sound coming in from the mics and attempts to detect whether the keyword (e.g., "Alexa") has been spoken. As soon as it detects the keyword, the DSP sends an interrupt to wake up the ARM core, which in turn resumes Linux. But remember: while Linux is waking up, the human who said "Alexa" will have continued speaking (as in, "Alexa, what time is it?"). The DSP buffers the "what time is it?" part in the on-chip RAM, and once Linux has resumed, it fetches the buffered speech and uses natural language processing (partly local, partly cloud) to understand what the human said.

\n\n
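As a toy model of that flow (all names and sizes here are illustrative, not Amazon's actual firmware): a front end that is always listening locally, and that buffers the speech following the wake word until the main CPU has resumed:

```python
from collections import deque

class WakeWordFrontEnd:
    """Toy sketch of the DSP front end described above: detect the wake
    word locally, then buffer subsequent speech (the deque stands in for
    the 64 KB on-chip RAM) until "Linux" drains it for NLU."""

    def __init__(self, wake_word="alexa", buffer_frames=64):
        self.wake_word = wake_word
        self.awake = False
        self.buffer = deque(maxlen=buffer_frames)

    def feed(self, frame):
        """Process one chunk of audio (modelled as text)."""
        if not self.awake:
            if self.wake_word in frame.lower():
                self.awake = True          # "interrupt": wake the ARM core
        else:
            self.buffer.append(frame)      # buffer speech while Linux resumes

    def drain(self):
        """Linux, once resumed, fetches the buffered speech for NLU."""
        utterance = " ".join(self.buffer)
        self.buffer.clear()
        self.awake = False
        return utterance
```

Nothing leaves the device until after `drain()`, which is the point at which cloud NLU would be involved.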

As you can see, the design is deliberately made as power-frugal as possible and avoids any need to involve the cloud for keyword detection and initial buffering. Keeping the ARM core at its lowest power state also means the silicon heats up the least when idling, which in turn extends the life of your device.

\n\n

I am leaving out discussion of attempts to hack the Echo, as the question puts that out of scope. The key point is:

\n\n
\n

the wake word recognition is indeed done locally.

\n
\n" }, { "Id": "360", "CreationDate": "2016-12-17T10:04:23.207", "Body": "

Assuming I cannot use wireless technologies such as LoRa, LTE-M or SigFox in the environment for the install, I must use a wired sensor protocol to communicate with the gateway installed remotely in a building.

\n\n

The cable runs can be up to 20 m from the gateway, and ideally I will be purchasing a reliable solution that is not overly expensive. It would be excellent if the sensor were CE compliant, but there doesn't seem to be a low-cost option (<\u00a320), e.g. solar.

\n\n

So, the requirements are:

\n\n\n\n

There are many options such as SPI, I2C, RS485, Onewire and CAN. The protocol we select will determine the sensor we select.

\n\n

The OneWire protocol from Maxim (Dallas) looks ideal, but there is as yet only a limited range of sensors (that said, we could use the Maxim bridge).

\n\n

What is the best wired protocol for the requirements listed above?

\n", "Title": "Wired sensor protocol for buildings monitoring sensors?", "Tags": "|protocols|sensors|wired|", "Answer": "

I would either go with RS485 or CAN because with long buses a lot of noise can be picked up. These are the most noise resistant as both of them use differential, twisted data lines.

\n

RS485 supports distances up to ~1,200 meters with a guaranteed speed of 100 kbit/s, and up to 10 Mbit/s at shorter distances. It is a multi-point bus with up to 32 drivers and 32 receivers (one active driver at a time).

\n

CAN is usable over 20 meters as well, as this Controller Area Network Physical Layer Requirements document shows.

\n
\n

\"cable

\n
\n

To repeat my comment, I2C is out of the question because of the long distances: the bus capacitance would be too high. It is designed for short, on-board distances.

\n

As for SPI, here is another document about Extending the SPI bus for long-distance communication, but it might be complicated. So I would stay with CAN or RS485.

\n

Both of them are pretty common, so finding sensors would not be a problem IMO.

\n\n

There are sensors with CAN interface as well, but RS485 is more common, so maybe that would be the cheapest and the easiest.

\n" }, { "Id": "374", "CreationDate": "2016-12-17T19:45:47.387", "Body": "

I recently registered with IFTTT, which seems like a fantastic service to chain events together in order to create a smart home or automate various services.

\n

I've just found the Maker channel which allows you to make simple HTTP requests (e.g. GET and POST), and I'm hoping to use this to securely send a message to a Raspberry Pi I have running that is waiting for any API request on a certain route (let's say, for example, POST /foo).

\n

The Makezine article I linked suggests this method for security:

\n
\n

Now what I did above was horribly insecure, I basically exposed to the world a script \u2014 a web application in other words \u2014 that could toggle a switch controlling a light in my house on and off. This is obviously not something you want to do, but that\u2019s why IFTTT\u2019s services provides the capabilities to pass more information to the remote service.

\n

It wouldn\u2019t be difficult to set up a TOTP authenticated link between the two for instance, or a token or key exchange \u2014 and to protect your IFTTT account itself? They\u2019ve just added two-factor authentication.

\n
\n

I read more about Time-based One-time Passwords on Wikipedia, which seems to suggest that there is an element of computation involved in order to generate the one-time password.

\n
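To illustrate the computation involved, here is a minimal HOTP/TOTP sketch in the style of RFC 4226 / RFC 6238, using only the Python standard library; the secret below is of course a placeholder:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, period=30, digits=6):
    """Time-based variant (RFC 6238): the counter is simply the current
    Unix time divided into 30-second steps, so both ends can compute it."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test vector: counter 1 with the ASCII secret
# b"12345678901234567890" yields "287082".
```

The point is that generating the password requires an HMAC computation at trigger time, which is exactly what plain IFTTT recipes cannot do.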

Since IFTTT does not support chaining of tasks or any scripting, how do I generate the TOTP as suggested in the article? Is it possible at all to do this, since some calculations are required and there doesn't seem to be a way to do these?

\n", "Title": "How do I generate a Time-based One-time Password with IFTTT?", "Tags": "|security|ifttt|one-time-password|", "Answer": "

The linked article is a little misleading. The interface provided by IFTTT is not completely open; it requires a key in the request. Since the request is made using HTTPS, the secret is not directly observable (provided your client always reliably connects to IFTTT, not a man-in-the-middle proxy).

\n

From the maker channel information page (user specific)

\n
\n

To trigger an Event Make a POST or GET web request to:

\n
https://maker.ifttt.com/trigger/{event}/with/key/my-secret-key\n
\n

With an optional JSON body of:

\n
{ "value1" : "", "value2" : "", "value3" : "" }\n
\n

The data is completely optional, and you can also pass value1, value2, and value3 as query parameters or form variables. This content will be passed on to the Action in your Recipe.

\n

You can also try it with curl from a command line.

\n
curl -X POST https://maker.ifttt.com/trigger/{event}/with/key/my-secret-key\n
\n
\n
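For a non-browser client, the same trigger can be fired from Python with just the standard library; `door_opened` and `my-secret-key` below are placeholders for your own event name and key:

```python
import json
import urllib.request

def build_trigger(event, key, values):
    """Build the Maker-channel POST request (pass it to urlopen to send).
    Everything travels over HTTPS, so the key is not visible on the wire."""
    url = f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"
    data = json.dumps(values).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

req = build_trigger("door_opened", "my-secret-key", {"value1": "front door"})
# urllib.request.urlopen(req)  # uncomment to actually fire the event
```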

Now the key is only low entropy so could potentially be reversed from monitoring your requests (unless you pad them with high quality noise), but the request for per-session security is in this case satisfied by TLS which handles the setup of the HTTPS channel.

\n

To make the communication significantly more secure would require IFTTT to specifically support endpoint authentication, but this appears to exceed the security which is applied to the other service-side links. This means that your maker channel to IFTTT is currently equally secure as the IFTTT channel to your in-home appliances.

\n" }, { "Id": "379", "CreationDate": "2016-12-17T23:09:56.763", "Body": "

I am considering getting a Logitech Harmony Hub. It's my understanding that I can define activities which can entail using certain commands of the compatible devices. However, I am unable to find what kind of actions I can use within these activities and if these activities can contain several actions for the same device.

\n\n

Is it possible to automatically switch my TV, my home cinema system and Fire TV on, and set all of them to the proper input modes when they have booted? Is there a list somewhere of the things I can and cannot do with the Harmony Hub?

\n", "Title": "What can I actually do with a Harmony Hub?", "Tags": "|logitech-harmony|", "Answer": "
\n

Is it possible to automatically switch my TV, my home cinema system and Fire TV on and set all of them to the proper input modes when they have booted?

\n
\n\n

Yes, this is a standard activity. Here are two sample activities:

\n\n\n\n

There is some info here: Understanding Harmony Activities

\n\n

You can also bind different buttons to different commands for different activities.

\n" }, { "Id": "380", "CreationDate": "2016-12-18T01:46:59.720", "Body": "

I know that I can control a Nest thermostat from a Wink Hub (and app) by \"pushing\" commands to the thermostat (e.g. \"turn up the temperature\"). However can the Wink hub be notified when the Nest thermostat detects that I'm away? I want to use the away status as a trigger for a Wink Robot (e.g. \"When I'm away, turn off the lights.\")

\n", "Title": "Can the Wink Hub 2 \"listen\" for away status from the Nest Thermostat v3?", "Tags": "|nest-thermostat|wink-hub|", "Answer": "

Yes, apparently that's possible.

\n

In this blog some possibilities of connecting Nest with Wink robots are described.

\n
\n

The first Robot, I named \u201cSomeone\u2019s Home\u2019. I simply chose the robot to activate when Nest Away, detects someone is home. I set the time period to anytime. Then as a result, I set the robot to alert me via email.

\n
\n
\n
\n

My second Robot was not as diabolical. When Nest detects we are away, Wink will wait 10 minutes, then lock the door!

\n
\n

Thus, the Wink Hub can react to that away state of Nest.

\n" }, { "Id": "387", "CreationDate": "2016-12-18T12:20:17.227", "Body": "

When using IFTTT, it's trivial to connect one trigger (if this) to one event/output (then that). However, I'm interested in using IFTTT for a slightly more complex query, along the lines of \"if this happens 3 times, do that\".

\n\n

A Quora question discusses this and suggests Numerous as a channel which can be used for more complex triggers, but it turns out that Numerous had to shut down due to lack of funding several months ago.

\n\n

A similar question was asked on Reddit with no satisfying answer, so I'm asking here in the hope that there is a better solution to my problem: How can I chain triggers in more complex queries with IFTTT? Is this even possible now that Numerous has shut down, or will I have to use an alternative service?

\n", "Title": "Can triggers be chained with IFTTT?", "Tags": "|ifttt|connector-services|", "Answer": "

IFTTT have now launched a maker platform which appears to support filters and chaining. I've not yet worked out if it has timers and cloud side variables/storage.

\n" }, { "Id": "395", "CreationDate": "2016-12-18T18:41:59.190", "Body": "

I've been reluctant to invest in many IoT devices, especially externally managed/subscription based devices because of issues having to deal with the closure of management services due to issues like planned obsolescence and corporate take-overs of parent companies, for example like what's happening with Pebble watches.

\n\n

I'm curious to know if there are any initiatives in active development (such as charters or legal frameworks) to devolve management rights of IoT devices or \"Opening\" the source code in the event of an end of product service.

\n\n

I've looked around a little bit on GitHub and the Free Software Foundation but haven't found anything like that. I'm wondering if there are any licenses or charters in development that imply the release of IoT source code once a service ends.

\n", "Title": "Are there any initiatives to prevent IoT obsolescence?", "Tags": "|standards|sustainability|open-source|", "Answer": "

A lot of people have struggled with this. No charter appears to be forthcoming, because the interests of the cloud providers lies in locking their hardware users in, based on the potential future profits to be made from subscribers. The more dependent you are upon their cloud, the more you'll be theoretically willing to pay for continued service (note that payment already includes non-monetary transactions such as targeted advertising and data collection.) So don't expect any of the cloud-based solutions to champion an exit strategy.

\n\n

So for now, you can overcome your reluctance by taking matters into your own hands. Your best defense is to purchase independent items that comply with standards (open or proprietary), as opposed to being run by proprietary clouds.

\n\n

Let's look at three examples: cloud-based, proprietary network, open network.

\n\n

Cloud-based

\n\n

AssureLink is a proprietary wireless network that uses a home hub to connect to a cloud-based service to provide remote access to one brand of garage door openers. When these openers were released on the market, a subscription to AssureLink cost $19.00 per year, and there were few sales. The company dropped charging for the service so they haven't gone away, but there is always concern that once they've saturated the market, the company will have no reason to continue the service, at which point the devices would become useless.

\n\n

This concern is not without merit.

\n\n

Pebble's recent acquisition by Fitbit has thrust them into the spotlight, but Revolv was the previous poster child for IoT devices being bricked by their parent company. The Revolv home hub connected to a proprietary cloud service. Google bought Nest, and then bought Revolv to get some of the people. They saw Revolv as splintering the home automation market away from Nest so they shut it down, bricking every Revolv home controller.

\n\n

Is this true for all cloud-based hardware? Apple, Google, and Amazon are all as stable as any provider can be. But Revolv sure looked good when Google first bought them; Nest is reportedly in disarray internally (and Google's new hub is pitted directly against Nest's philosophy), and you only have to look as far as Microsoft's Zune or Phone to see that being big doesn't mean that a centralized service will remain successful over the life of your devices.

\n\n

Proprietary P2P network standard

\n\n

Z-Wave is an example of a proprietary mesh network technology. Z-Wave devices can talk to each other, but each device requires a communications chip licensed by a single company. Development and per-chip licenses are ridiculously expensive, so the price of the devices will never come down until the patents expire, and even then only if someone else enters the market to compete with them. However, once you have a Z-Wave device, it will continue to work with new and old Z-Wave devices.

\n\n

Open P2P network

\n\n

WiFi based cameras are a good example of devices based on open network technology. Apps for smart phones can connect directly to the camera, no cloud required, no expensive licensed technology. However, these have drawbacks. In order to use these devices without a cloud, the installer has to configure their router/firewall to enable remote access. And as we've seen with the Mirai botnet, these devices are responsible for their own security but not all do a good job of it, and they even put the rest of your network at risk.

\n\n

The future with Open Source

\n\n

The open source answer to this is projects like OpenHab, Domoticz, mosquitto, and others. Instead of a proprietary cloud, you run the P2P devices from your own server. Only the server is exposed to the internet, and for the most part these are positioned to be better hardened. At this point this approach is still very much in its infancy, and all the solutions so far require some technical skills to set up and maintain a home network.

\n\n

However, by focusing on P2P devices today, whether they use an open or closed protocol, you are at least investing in building your own infrastructure that will work free of the dependency on proprietary external services. You can even get started with a proprietary home hub, as long as the devices are communicating via a P2P standard that exists beyond a single company. The open source solutions can't get any worse than they are right now, and are rapidly improving.

\n" }, { "Id": "400", "CreationDate": "2016-12-19T15:11:44.747", "Body": "

Is it currently possible to link a Nest thermostat with the Samsung SmartThings Hub?

\n\n

The Nest is not available in the list of thermostats to add, within the SmartThings App. However, is there at least a SmartApp that allow a bit more automation?

\n\n

If not, is there a newsletter or RSS feed to let me know when Nest might be supported?

\n", "Title": "SmartThings Hub and Nest Thermostat", "Tags": "|samsung-smartthings|nest-thermostat|", "Answer": "

According to SmartThings's Support Page:

\n\n
\n

SmartThings doesn\u2019t officially support Nest at this time, but many users instead utilize a custom integration created by a developer in the SmartThings Developer Community. This integration reportedly works well, and you can set it up through the SmartThings IDE by following the steps here.

\n
\n\n

In other words, it isn't officially supported, but it appears that it can work. Ifttt.com claims to have designed an applet which connects these two.

\n" }, { "Id": "405", "CreationDate": "2016-12-19T20:56:30.947", "Body": "

According to this blog, Mosquitto (the MQTT broker) now supports connecting to clients over web sockets. The blog article seems to hint that web sockets are more useful for browser applications, since web browsers don't support proper TCP sockets (yet), although the web socket protocol is supported by the majority of modern browsers.

\n\n

If I just have various clients in a network (e.g. sensors and actuators based on microcontrollers such as Raspberry Pis), will there be any advantage to using web sockets over direct TCP connections? Is the overhead of the web socket protocol only worth it when you are communicating with a browser?

\n", "Title": "Should I use Mosquitto's web sockets or connect clients directly?", "Tags": "|mqtt|mosquitto|web-sockets|", "Answer": "

The question here appears to be \"should I use MQTT over TCP, or use MQTT over websockets (which also goes over TCP)?\" In other words, is \"encapsulating MQTT in the websockets protocol a good idea?\"

\n\n

This is (almost) entirely down to your application and whether you need websockets support - probably for consuming messages in a browser or for firewall reasons. If you can't have your server be accessible on port 1883 or better 8883 for pure MQTT, then websockets may be your best option.

\n\n
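For reference, recent Mosquitto versions can serve both transports at once; a typical `mosquitto.conf` fragment might look like this (port 9001 is just a conventional choice for the websockets listener, not a requirement):

```
# Plain MQTT over TCP for local sensors/actuators
listener 1883
protocol mqtt

# MQTT over websockets for browser clients
listener 9001
protocol websockets
```

So choosing websockets for browsers does not force your microcontroller clients off plain TCP.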

Websockets does require extra bandwidth, but whether that is important to you is something only you can answer.

\n\n

It's also worth noting that in current versions of Mosquitto, websockets don't work as well as they could so there can be extra latency when sending/receiving websockets messages. That is something that will not be an issue in future versions though.

\n" }, { "Id": "411", "CreationDate": "2016-12-20T16:56:18.117", "Body": "

With the Amazon Echo, it's easy to set an alarm that will be triggered on the same device on which it was created. However, this isn't always useful if you're not going to remain in the same room when you expect the alarm to fire (e.g. if you set an alarm in the kitchen to wake you up in the morning).

\n

The Amazon documentation seems to suggest that alarms are independent of each device:

\n
\n

Ask Alexa to set multiple countdown timers or alarms using your voice.

\n

Each Alexa device has its own timers and alarms. You can set the timer or alarm up to 24 hours ahead.

\n
\n

Can these alarms be synchronised or shared in some way, or will I have to go to the room where I want the alarm to be set?

\n", "Title": "Will alarms set on one Amazon Echo be shared with all other devices in the same home?", "Tags": "|alexa|amazon-echo|", "Answer": "

Apparently not. This blog discusses the current impossibility of synchronized alarms, and I couldn't find any information suggesting that this has changed since. However, as the blog notes, it means that you can set one alarm and one timer per Echo, which multiplies your available timers and alarms. Alas, that's not your goal.

\n

I checked the usual suspects, but IFTTT does not offer a that-branch for Alexa. It also seems to be incapable of simply playing your voice back, which would make the voice output of one Alexa the input of another. Even then, a timer would be delayed, since timers are relative. Moreover, this would only work with different wake words, since otherwise the Echos don't both react (cf. this blog); only the nearest will respond. That also eliminates the possibility of just speaking somewhere all the Echos can hear you.

\n
\n

If you have more than one device using the same wake word, Alexa responds intelligently from the Echo you're closest to with ESP (Echo Spatial Perception), and performs the requested task.

\n
\n

(Amazon Documentation, emphasis mine)

\n

The page also confirms that the alarms are device specific.

\n
\n

Some content is not common between devices on the same account, including:

\n\n
\n

The following solidifies even more that there is no off-the-shelf solution for the dual alarms.

\n
\n

Note: You cannot connect multiple Alexa devices to each other and play the same requested audio at the same time.

\n
\n
\n

Of course you can use the app to set the alarm from wherever you are and can set it for each Echo. Still, that means setting the alarm twice. Currently, I see no option to easily set the alarm automatically on both (or all) Echos.

\n

(Update: you can select which echo to set the alarm or timer on by selecting it in your voice command and do not need to use the app.)

\n

For example: "Alexa, set a timer to go off in an hour in the kitchen" would set the timer to go off only in the kitchen.

\n" }, { "Id": "431", "CreationDate": "2016-12-21T15:29:08.313", "Body": "

IFTTT has support for Amazon Echo (and the Alexa assistant) through the Amazon Alexa channel. Some channels allow you to specify variables, known as ingredients, from a trigger which will be passed into the output event.

\n

The Google Assistant channel supports recognising a phrase with a text ingredient (see the "Google Assistant triggers and actions" at the bottom of the page):

\n
\n

Say a phrase with a text ingredient

\n

This trigger fires when you say \u201cOk Google\u201d to the Google Assistant followed by a phrase like \u201cPost a tweet saying \u2018New high score.\u2019\u201d Use the $ symbol to specify where you'll say the text ingredient

\n
\n

Alexa's trigger does not seem to mention anything about text ingredients:

\n
\n

Say a specific phrase

\n

This trigger fires every time you say "Alexa trigger" + the phrase that you have defined. For instance, if you set "party time" as the phrase, you can say "Alexa trigger party time" to have your lights loop colors. Please use lower-case only. Neither German characters (Umlaute/Eszet) nor their long-form equivalents (ae, oe, etc.) are currently supported \u2014 support is coming soon.

\n
\n

Does the Alexa channel support ingredients at all, or must I preprogram every possible input I want?

\n", "Title": "Does IFTTT's Alexa channel support 'text ingredients'?", "Tags": "|alexa|ifttt|", "Answer": "

No, it doesn't at the moment. The only this block that supports custom entries is the trigger block you already cited, and that block does not support the special characters that would be necessary to define IFTTT variables.

\n\n

So yes, you do have to create a recipe manually for every phrase Alexa is supposed to react to via IFTTT. Hopefully they will add a more flexible this block as well. On the Google channel we can see that there are basically four different this blocks: "simple phrase", "with number", "with text" and "with both". The Alexa channel only supports simple phrases so far. Hopefully they catch up.

\n" }, { "Id": "434", "CreationDate": "2016-12-22T10:08:33.430", "Body": "

I have recently read an article about why IoT should switch from the now dominant centralized (server - client) model to a decentralized peer-to-peer solution.

\n

Reasons against centralization:

\n
\n

Connection between devices will have to exclusively go through the internet, even if they happen to be a few feet apart.

\n

[...]

\n

it will not be able to respond to the growing needs of the huge IoT ecosystems of tomorrow

\n

[...]

\n

Existing IoT solutions are expensive because of the high infrastructure and maintenance cost associated with centralized clouds, large server farms and networking equipment. The sheer amount of communications that will have to be handled when IoT devices grow to the tens of billions will increase those costs substantially.

\n

[...]

\n

Even if the unprecedented economical and engineering challenges are overcome, cloud servers will remain a bottleneck and point of failure that can disrupt the entire network. This is especially important as more critical tasks such as human health and life will become dependent on IoT.

\n
\n

Instead the article suggests a decentralized approach to IoT networking using peer-to-peer communication. But:

\n
\n

However, establishing peer-to-peer communications will present its own set of challenges, chief among them the issue of security.

\n
\n

Other source, like How Is P2P Becoming the IoT Nightmare? also mentions security as a problem.

\n
\n

It seems that the P2P (peer-to-peer) communication capabilities embedded into some Internet of Things devices turns into a security headache for users.

\n
\n

So security is clearly an issue to be solved, but what else should I care about when choosing to use a decentralized P2P network?

\n

I am interested in general limitations, risks, and issues that can be used as points of comparison when I want to decide between centralized and decentralized network.

\n", "Title": "Disadvantages of decentralized peer-to-peer networks in IoT", "Tags": "|security|networking|", "Answer": "

Although decentralised networks often look appealing as a solution, centralised networks have a few compelling advantages which make them far more popular at the moment.

\n\n

Design Overhead

\n\n

Understanding, programming and setting up a decentralised network is often much more challenging than a traditional centralised (client-server) model in the majority of cases. Take, for example, a mesh network (the topology used by ZigBee, Z-Wave and Thread), which tends to be structured like so:

\n\n

\"Mesh

\n\n

(image from Wikipedia, in the public domain)

\n\n

Nodes in the network must be capable of sending messages for themselves, but they must also be able to route messages across the network, which is much more complex, since each node must be able to calculate a route to the destination. The IETF has a rather interesting presentation on the routing protocols used in 6LoWPAN, a mesh network protocol that uses IPv6. As you can see, the design is far more involved than a traditional star network connected to a Wi-Fi router, and of course each node will need more computational power to handle the additional processing steps needed to participate in the mesh network.

\n\n
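To make the extra per-node work concrete, here is a toy sketch of the path computation a mesh router effectively has to perform: a plain breadth-first search over an illustrative five-node topology (real protocols such as RPL are far more sophisticated, but the per-node cost of holding and searching topology is the same idea):

```python
from collections import deque

def route(mesh, src, dst):
    """Find a shortest hop-count path through the mesh with BFS.
    Every routing node needs enough topology knowledge (and CPU/RAM)
    to do something like this - the overhead discussed above."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:   # walk back to the source
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbour in mesh[node]:
            if neighbour not in prev:
                prev[neighbour] = node
                queue.append(neighbour)
    return None  # destination unreachable

# Illustrative topology: A-B, A-C, B-D, C-D, D-E
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}
```

A star network needs none of this on the leaf nodes; the hub does all the forwarding.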

The main problem of all of this is that each node will require more processing power, and hence cost more. A TechTarget article addresses this more generally:

\n\n
\n

Centralization is an attempt to improve efficiency by taking advantage of potential economies of scale: improving the average; it may also improve reliability by minimizing opportunities for error. Decentralization is an attempt to improve speed and flexibility by reorganizing to increase local control and execution of a service: improving the best case.

\n
\n\n

The other side effect will be that the increased computation will lead to a higher power consumption (although this tradeoff may not be that significant compared to long-range transmissions which would otherwise be necessary).

\n\n

Security

\n\n

Unless security is built-in to the design of the decentralised protocol, it can easily pose a big problem. Since nodes will be passing data between each other, it's difficult to guarantee the integrity of each packet, since one router node could easily replace one packet with another, and the receiver would never be able to tell the difference. A Chron article describes this issue well:

\n\n
\n

If a computer becomes infected or malicious computer enters a mesh network, it can pretend to be a trusted member of that network and then modify sent data and disrupt how the network passes information. In a black hole attack, information passing through the infected computer will not continue through the network, blocking the flow of data. In gray hole attacks, some data may be blocked, while other data is allowed, making it seem like the computer is still a working part of the network. Wormhole attacks are harder to detect: They tunnel into a network computer from the outside and pretend to be other nodes in the network, essentially becoming invisible nodes. They can then monitor network traffic as it passes from one node to the next.

\n
\n\n

Economies of Scale

\n\n

Large cloud platforms such as AWS or Azure allow you to set up a centralised server for a price that is dirt cheap - Amazon and Microsoft have the benefit of running truly immense server farms which allows them to provide server space for a very low price. Have a play with the Azure pricing calculator to see what I mean - you can run a function 1 million times in a month, using 128MB of RAM and taking 5 seconds per execution for \u00a32.20/month, and scaling up the capacity is trivially easy.

\n\n

On the other hand, scaling up a decentralised network requires adding more and more nodes, and I expect that you would start to see diseconomies of scale, since the nodes would need to spend more and more time routing data rather than performing useful computation.

\n\n

In summary, although decentralised networks looks like a perfect solution, they do have significant disadvantages, which is why many IoT developers still favour centralised solutions.

\n" }, { "Id": "439", "CreationDate": "2016-12-22T18:17:24.193", "Body": "

Thread have produced a document about their protocol, Thread Stack Fundamentals, which I've been reading to try and understand more about how Thread works.

\n\n

On page 5, the document explains that despite having no single point of failure, a Leader is needed to make decisions for the network:

\n\n
\n

A Router or Border Router can assume a Leader role for certain functions in the Thread Network. This Leader is required to make decisions within the network. For example, the Leader assigns Router addresses and allows new Router requests. The Leader role is elected and if the Leader fails, another Router or Border Router assumes the Leader role. It is this autonomous operation that ensures there is no single point of failure.

\n
\n\n

How is the Leader elected by the devices in the Thread network? Is there a set of criteria that are evaluated when the devices 'vote' for or select the Leader?

\n", "Title": "How does Thread elect a Leader device?", "Tags": "|thread|", "Answer": "

Great question! I found an article from radio-electronics.com which really helps explain in some more detail how Thread works. Basically, the first eligible router node self-designates as the leader.

\n\n

In other words, when a node is added, if it is unable to find a leader in the system, it will automatically designate itself as the leader. Otherwise, it will fall into line under the existing leader node. I quote from the article referenced above:

\n\n
\n

Router Eligible nodes become routers if they are needed to support the mesh. The first Router Eligible node to form the network will be autonomously designated a router as well as the Leader. A Leader performs additional network management tasks and makes decisions on behalf of the network. Other Router Eligible nodes in the network can assume the role of a Leader, but there is only one Leader per network at a given time.

\n
\n\n

In other words, it's a one-machine election. Not very democratic, but in computers, it works.
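The rule described in the quote can be sketched as a toy model (illustrative only, not actual Thread code):

```python
class ToyThreadNetwork:
    """Toy model: the first router-eligible node to form the network
    designates itself Leader; if the Leader fails, another
    router-eligible node assumes the role."""

    def __init__(self):
        self.leader = None
        self.routers = []

    def join(self, node, router_eligible=True):
        if router_eligible:
            self.routers.append(node)
            if self.leader is None:      # no leader found: self-designate
                self.leader = node

    def leader_failed(self):
        self.routers.remove(self.leader)
        self.leader = self.routers[0] if self.routers else None

net = ToyThreadNetwork()
net.join("A")
net.join("B")
net.join("C")
print(net.leader)   # A - the first node self-designated
net.leader_failed()
print(net.leader)   # B - another router-eligible node takes over
```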

\n" }, { "Id": "447", "CreationDate": "2016-12-23T17:48:03.313", "Body": "

ANT/ANT+ is a proprietary but open-access multicast wireless sensor network technology. Its data rate, and the resulting application throughput of 20 to 60 kbit/s, is significantly lower than that of its competitors, e.g. Bluetooth and ZigBee. For applications that can live with that restriction, and with a physical range comparable to other wireless network systems, it might well be an interesting alternative. It seems to be primarily used in sports and fitness sensors by a number of manufacturers.

\n\n

This Wikipedia page states that:

\n\n
\n

Geräte benötigen beim Empfang oder Senden weniger als 50 mW Leistung. Da sie die meiste Zeit im Sleep-Mode verharren, ist die Gesamtstromaufnahme gering.

\n
\n\n

Which roughly translates to:

\n\n
\n

Devices require less than 50 mW of power when receiving or transmitting. Since they remain in sleep mode for most of the time, the total current consumption is low.

\n
\n\n

It focuses on ANT being specifically well suited for low power sensor networks with less than 50 mW of power consumption during transmissions and being in sleep mode most of the time.

\n\n

However, one would expect any battery-powered appliance (and even more so devices powered by energy harvesting) to make heavy use of deep sleep modes during times of inactivity. I wonder how a \"real life\" sensor network using ANT would compete against other technologies such as Bluetooth Low Energy in terms of power consumption?

\n", "Title": "Power consumption of ANT/ANT+ compared to other wireless sensor network technologies", "Tags": "|wireless|power-consumption|ant|", "Answer": "

The reason you see such a difference between ANT and BLE is that the use case for ANT is very different. Sports use cases look at data from a \"UDP\" point of view: they just want the most recent information, and they don't need to send much data, but they need to send it often.

\n\n

For example, BLE can behave similarly to a message-based system (as does Zigbee), where you only transmit if you have something to say; ANT does not do that. ANT transmits on a fixed period, and if you have nothing new to send, the radio sends the last message again.
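A toy transmission count makes the difference concrete (the numbers are invented purely for illustration):

```python
# 600 one-second slots (10 minutes); the sensor value changes 12 times.
slots = 600
value_changes = 12

# ANT-style: transmits every slot, resending the last message when
# there is nothing new to say.
ant_transmissions = slots

# Message-based (BLE/Zigbee-style): transmits only when something changed.
event_transmissions = value_changes

print(ant_transmissions, event_transmissions)   # 600 12
```

Which radio wins on power therefore depends on how often your application actually has something new to report.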

\n\n

Therefore I would recommend using the radio that best suits your use-case to get the best power consumption.

\n\n

Note: ANT+ is simply an application layer on top of ANT and has little to no effect on power consumption, beyond defining the channel configuration.

\n" }, { "Id": "450", "CreationDate": "2016-12-23T17:50:13.340", "Body": "

In Thread Stack Fundamentals, the four types of devices in a Thread network are outlined on page 5/6:

\n\n\n\n

It is implied by the document that Routers must be able to constantly supply themselves with electricity (hence REEDs must also be able to run constantly if they are upgraded to Routers). However, can REEDs save power and sleep if they aren't currently being used as routers, or must they always be on in case they are needed?

\n\n

Take, for example, a smart TV supporting Thread, currently acting as a REED. Since no other devices depend on it while it is a REED, can it suspend its network connection and act as a Sleepy End Device, or does the networking component need to be on all the time?

\n", "Title": "Can Router-eligible End Devices sleep in Thread networks?", "Tags": "|thread|", "Answer": "
\n

Can it suspend its network connection and act as a Sleepy End Device, or does the networking component need to be on all the time?

\n
\n\n

Yes, they can be Sleepy End Devices, but the network device is still on.

\n\n

According to the definition, \"Sleepy end devices are host devices. They communicate only through their Parent Router and cannot forward messages for other devices\"; i.e., their network device is not off!

\n\n

Sleepy end devices will be communicating with their parent router.

\n" }, { "Id": "452", "CreationDate": "2016-12-24T10:49:17.220", "Body": "

I am about to set up an MQTT network at home. I want to build up some knowledge through practical exercises. It would be a small network with the broker hosted on my laptop (Windows 7) and some Raspberry Pi-powered clients. I am also thinking about making a client for my phone (Android).

\n

My goal is to have a simple network on which I can experiment and I want to perform some security testing, experimenting first.

\n

I have found an MQTT Server Test Suite which is designed to act as a malicious MQTT client. It is pretty promising to start with.

\n
\n

Test tool general features

\n\n
\n

I am also interested in some more simple practices I can use to verify MQTT security features. What are the simplest ways for a beginner to perform some basic security verification on an MQTT network?

\n", "Title": "What simple security tests can I perform on my MQTT network?", "Tags": "|security|mqtt|mosquitto|testing|", "Answer": "

Some ideas - I've not covered all combinations of with/without username/TLS, hopefully you can see where they are missing.

\n\n

Can a client connect anonymously, no TLS?

\n\n
mosquitto_sub -t test/topic -h <broker address>\n
\n\n

Can a client connect if it provides a username but no password, no TLS?

\n\n
mosquitto_sub -t test/topic -u <username> -h <broker address>\n
\n\n

Can a client connect if it provides a username and a password (correct or not), no TLS?

\n\n
mosquitto_sub -t test/topic -u <username> -P <password> -h <broker address>\n
\n\n

Can a client subscribe to the $SYS topic and see information about the broker?

\n\n
mosquitto_sub -t '$SYS/#' -v -h <broker address>\n
\n\n

Can a client connect using TLS?

\n\n
mosquitto_sub -t test/topic -h <broker address> -p 8883 --capath /etc/ssl/certs\n
\n\n

Can a client subscribe to all topics? What does it see?

\n\n
mosquitto_sub -t '#' -v\n
\n\n

Repeat all the above when publishing as well.

\n" }, { "Id": "458", "CreationDate": "2016-12-26T17:57:22.777", "Body": "

When booting, the Amazon Echo plays a sound and says 'hello', which is usually not a problem, but I'm concerned that if I have a power failure in the night, the Echo might reboot and play a loud sound, waking me up.

\n\n

One user seems to have experienced a similar problem on the Amazon forum:

\n\n
\n

[OP]: I'd prefer Echo didn't make a startup sound and say \"hello\" after a power outage. Is there a way to disable the startup sounds?

\n \n

[reply]: Do you really have so many power outages that this is a significant problem? I've only had one power outage in the past two years.

\n \n

[OP]: Yes. Construction-related. Likely to continue for at least a year.

\n
\n\n

Understandably, in a situation such as this, it is not at all desirable for the sound to be playing frequently at night. One reply suggested an uninterruptible power supply (UPS), but I would like to avoid this if possible because they are quite expensive.

\n\n

How can I stop my Echo from playing the startup greeting? Is it supported at all (or is there a suitable workaround)?

\n", "Title": "How do I stop an Amazon Echo from making a sound after restarting?", "Tags": "|amazon-echo|", "Answer": "

I'm a few hundred kilometers away from my Echo, so unfortunately I can't test anything. I found no way to deactivate the sound altogether. If I recall correctly, however, that waking \"hello\" is tied to the usual volume level one sets.

\n\n

Fortunately that volume level is different from the timer and alarm volume levels. So one could at least set the speech volume level to zero overnight and still be woken by a reasonable volume level for the alarm clock. However, you'd have to reset the speech volume level back up in the morning, manually (well, by voice).

\n\n

Of course, that's only necessary if the Echo is the alarm clock. If not, one could always put the Echo on a timed power plug, shutting its power off completely overnight. Maybe one could even connect the Echo's power supply to the alarm clock, but that'd be a different question.

\n" }, { "Id": "459", "CreationDate": "2016-12-26T19:41:54.513", "Body": "

Google states that the Google Assistant (the personal assistant that runs on the Google Home, Pixel and the Allo app) uses 'conversation history' for targeted advertising:

\n\n
\n

Does Google use my conversation history to personalize the ads I see?

\n \n

If you interact with the Google Assistant, we treat this similarly to searching on Google and may use these interactions to deliver more useful ads. You can delete past interactions with your Assistant at any time.

\n
\n\n

What exactly is Google using when it says 'conversation history' - is Google Home listening to everything so it can target advertisements or are my queries only stored after saying 'OK Google'?

\n\n

Some sources suggest that the Google Home might even be recording what I listen to on the TV:

\n\n
\n

\"That microphone will be a witness to every verbal interaction in the home. It will also know what you watch on TV, what you listen to, and, obviously, when there's no one home.\"

\n \n

- Computerworld

\n
\n", "Title": "Does the Google Home record conversations to target advertisements?", "Tags": "|google-home|privacy|google-assistant|", "Answer": "

On the Voice & Audio section of the Google My Activity page, you can see your history. It only keeps recordings made after you trigger it with the wake word (\"OK Google\"). You can also delete your history there.

\n" }, { "Id": "460", "CreationDate": "2016-12-26T20:08:51.040", "Body": "

This article quotes the CEO of Image Ware,

\n\n
\n

[The solution] according to Miller, is multi-modal biometrics which he claims makes\n it virtually impossible for the wrong person to access computer\n systems.

\n
\n\n

His company uses existing hardware and platforms, connecting physical feature recognition algorithms (finger, palm, hand and prints, and face, eye, iris ) with other algorithms employing common biometric data sensors found on today's mobile devices.

\n\n

My gut feeling is that he has overstated this somehow, but I cannot put my finger on why this rings untrue. It seems to me that if a multi-sensor approach were truly effective, we would see hardware and software for such strategies everywhere by now.

\n\n

Can an IoT network of diverse sensors be an efficient and effective security strategy? (Is the multi-sensor approach effective? )

\n\n

What are the pitfalls?

\n", "Title": "Will multiple simultaneous biometric sensors create unbreakable security for devices?", "Tags": "|security|sensors|machine-learning|biometrics|", "Answer": "

First, the quote seems to have been about securing mobile devices, not about \"an IoT network of diverse sensors\", but some lessons can perhaps still be drawn.

\n\n

Unlike with a mobile device, an \"IoT network\" of sensors tends to imply that they aren't all in the same place, so a user likely can't be expected to qualify in the judgement of all of them at once. This means that a system would need to be very provisional about the authenticity of the user - in effect:

\n\n
\n

You walk like Joe and know Joe's password, so maybe you are Joe, and I'll let you do Joe's less critical things unless I start suspecting you aren't Joe; but to do something more critical you're going to have to go here and do this, and go there and stare into that, and repeat the following phrase, and...

\n
\n\n

But just as critically, and in common with the mobile device case, such a scheme only secures the front door. It offers no protection against at least three other types of vulnerability.

\n\n\n" }, { "Id": "463", "CreationDate": "2016-12-26T20:58:24.230", "Body": "

As the title says: What is the difference between the Internet of Things and the Internet of Everything and should I care?

\n\n

I came cross two concepts the Internet of Things and the Internet of Everything. Can anyone help me understand: How do the two topics differ from each other?

\n\n

If you have time to watch a 20 minute video, this is where I first got the topic from: Introduction to the Internet of Everything by Eli the Computer Guy.

\n", "Title": "What is the difference between the Internet of Things and the Internet of Everything?", "Tags": "|definitions|", "Answer": "

The aim of the Internet of Everything is:

\n\n
\n

connecting the unconnected \u2014 people-to-people (P2P),\n machine-to-people (M2P), and machine-to-machine (M2M) \u2014 via the\n Internet of Everything (IoE).

\n
\n\n

So basically it seems to be a broader term, the next generation of IoT according to Cisco.

\n\n

The point is that all devices will have a direct connection to the Internet, while in IoT the devices are members of an Internet-like network but not necessarily the Internet itself.

\n\n

What makes this possible is IPv6, which allows countless devices to have their own IP addresses.

\n\n
\n

Second, barriers to connectedness continue to drop. For example, IPv6\n overcomes the IPv4 limit by allowing for \n 340,282,366,920,938,463,463,374,607,431,768,211,456 more people,\n processes, data, and things to be connected to the Internet.\n Amazingly, IPv6 creates enough address capacity for every star in the\n known universe to have 4.8 trillion addresses.

\n
\n\n

So IoE emphasizes TCP/IP connection and larger, more comprehensive networks.
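The 39-digit figure in the quote is simply 2^128, the size of the IPv6 address space (compared with 2^32 for IPv4):

```python
ipv6_addresses = 2 ** 128
ipv4_addresses = 2 ** 32

print(ipv6_addresses)  # 340282366920938463463374607431768211456
print(ipv4_addresses)  # 4294967296
```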

\n\n

But all in all, there is not much difference. Using only TCP/IP in IoE excludes a lot of IoT platforms and devices, but the concept does not change much.

\n\n

Nor does the targeted connection of M2M, P2P and M2P make a significant difference.

\n" }, { "Id": "480", "CreationDate": "2016-12-27T14:37:44.390", "Body": "

I've been reading about the 6LoWPAN protocol (which is used by Thread, among other network protocols), and it seems to be highly useful for networking, and has the advantage of allowing each device to easily be addressable.

\n

Wikipedia says that 6LoWPAN uses a form of header compression to reduce transmission size (hence saving time and energy):

\n
\n

The target for IP networking for low-power radio communication is applications that need wireless internet connectivity at lower data rates for devices with very limited form factor. An example is automation and entertainment applications in home, office and factory environments. The header compression mechanisms standardized in RFC6282 can be used to provide header compression of IPv6 packets over such networks.

\n
\n

It refers to RFC 6282 as the compression format used. The abstract is rather brief in how it works:

\n
\n

The compression format relies on\nshared context to allow compression of arbitrary prefixes. How the\ninformation is maintained in that shared context is out of scope.

\n
\n

As far as I can tell, 'shared context' is used to elide some header fields and save space. How, exactly, is this 'shared context' managed, and why doesn't every IPv6 device (e.g. my computer) use this compression?

\n", "Title": "Why does 6LoWPAN use additional header compression?", "Tags": "|networking|6lowpan|compression|", "Answer": "

The RFC draft explains in a bit more detail how the header compression works. What the abstract describes as arbitrary prefixes is essentially a set of fields that are assumed to lie in a certain range or to have a specific value, which makes transmitting that information unnecessary.

\n
\n

To enable effective compression LOWPAN_IPHC relies on information pertaining to the entire 6LoWPAN.

\n
\n

This assumption holds for the entire 6LoWPAN; breaking down the next sentence, we have six pieces of information that are assumed to be known or to lie in a considerably smaller range of values.

\n
\n\n
\n\n

(Bullet points by me, otherwise continuing section three of the draft.)

\n

This also explains why this compression is not used by devices like your PC. Your PC has to be able to address basically the whole world, whereas the compression described above is only really effective in a sufficiently constrained environment (the entire 6LoWPAN), not the world.

\n
\n

In the best case, the LOWPAN_IPHC can compress the IPv6 header down to two octets (the dispatch octet and the LOWPAN_IPHC encoding) with link-local communication.

\n
\n

(Still same draft section)

\n

As described in the RFC draft, the compression gets even better in a more constrained environment: for link-local-only traffic, the header is down to two octets. Of course, we want our usual computers and end devices to be able to address everything else instead. If we want to be able to change all the values this compression takes for granted, it no longer works: we'd have a mismatch between what we mean and what every other network participant thinks we mean.

\n

Thus, the compression trades the vast addressing possibilities of IPv6 for smaller headers, gaining time and energy efficiency within a small, well-defined environment.
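As a much-simplified illustration of compressing against shared context, here is a toy "compressor" that elides a prefix every node in the network already knows (this is not the real LOWPAN_IPHC encoding):

```python
# Context assumed to be shared by every node in the toy network.
SHARED_PREFIX = "fe80::"

def compress(address):
    """Elide the shared prefix instead of sending it."""
    if address.startswith(SHARED_PREFIX):
        return address[len(SHARED_PREFIX):]
    return address   # out of context: must be sent in full

def decompress(suffix):
    """The receiver restores the prefix from its own copy of the context."""
    return SHARED_PREFIX + suffix

wire = compress("fe80::1a2b")
print(wire)               # 1a2b - fewer bytes on the air
print(decompress(wire))   # fe80::1a2b
```

If the two ends disagreed about the shared context, decompression would silently produce the wrong address, which is exactly the mismatch described above.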

\n" }, { "Id": "496", "CreationDate": "2016-12-27T20:41:07.707", "Body": "

How can I use 2FA (two factor authentication) when I connect a new device to the broker, if it is even possible?

\n\n

Because it seems easier, the second factor can be a software solution at first, but I would welcome ideas on how to introduce hard tokens (RFID, maybe).

\n\n

It would make sense if the devices had to authenticate only at the first connection and the server remembered \"old\" clients.

\n\n

The idea may be unusual or unsuitable; if it is a bad idea, please give me the reasons why.

\n", "Title": "How can I use 2FA in an MQTT network?", "Tags": "|security|mqtt|authentication|", "Answer": "

To achieve 2FA in an MQTT network, I have created the following services for authentication, which are connected to the broker:

\n\n
    \n
  1. ID Verifier
  \n
  2. Token Generator
  \n
  3. Token Verifier
  \n
\n\n

When an MQTT client connects to the broker over SSL/TLS, it first publishes its own ID to the device_id topic. The ID Verifier checks that it is an authentic client, and then the Token Generator is invoked, which generates a token and publishes it on the locked topic device_token.

\n\n

The client device gets this token and publishes it to the verify_token topic. As soon as a value is published on verify_token, the Token Verifier compares the values on device_token and verify_token; if they match, it adds the device's ID to the verified device pool and allows the device to publish data. This improves security because only verified devices get to publish on the data topics.

\n\n

I have also used the MQTT_KEEPALIVE configuration option to keep the client active when no data is being sent or received, so the client device stays in the device pool and does not have to be verified again once it has been added. However, for security purposes, I force the device through 2FA every 24 hours.
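The Token Generator / Token Verifier pair could be sketched like this (a minimal illustration with hypothetical names, not the author's actual services; a real deployment would also add token expiry):

```python
import secrets

issued_tokens = {}     # device_id -> token published on the device_token topic
verified_pool = set()  # devices currently allowed to publish data

def generate_token(device_id):
    token = secrets.token_hex(16)
    issued_tokens[device_id] = token
    return token

def verify_token(device_id, token):
    # constant-time comparison to avoid timing side channels
    if secrets.compare_digest(issued_tokens.get(device_id, ""), token):
        verified_pool.add(device_id)
        return True
    return False

t = generate_token("sensor-1")
print(verify_token("sensor-1", t))        # True - added to the device pool
print(verify_token("sensor-1", "bogus"))  # False
```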

\n" }, { "Id": "498", "CreationDate": "2016-12-28T04:25:34.443", "Body": "

I've been reading about Nest's various devices, and one particularly useful device has come to my notice: Nest Protect. However, one thing still puzzles me. It says on their web page that:

\n\n
\n

The new Nest Protect has been redesigned from the inside out. It has an industrial-grade smoke sensor, tests itself automatically, and lasts up to a decade.

\n
\n\n

How does the alarm auto-test? Is this just a battery check, or is it actually testing whether the alarm will go off in a smoky environment? Being an IoT device, I would naturally assume that it measures more than just a simple battery level, but what, and how?

\n", "Title": "How does the Nest Protect autotest?", "Tags": "|smart-home|testing|nest-protect|", "Answer": "

The Nest Protect seems to have three self-testing mechanisms:

\n\n\n\n

The automatic testing mechanisms cannot be disabled:

\n\n
\n

Self Test helps ensure that your Nest Protect is working to keep you safe, and gives you essential information when you need it, so it can't be disabled.

\n
\n\n

All of the test results are collected into the Nightly Promise, which verifies that there are no problems with your smoke/CO detectors before you sleep. If the Nest Protect glows yellow at night, this indicates a problem that requires attention (e.g. battery needs replacing, fault with sensors). You can view this in the app to see the exact problem.

\n\n

The Nightly Promise automatically shows as soon as you dim the lights at night - no button is required for it to display.

\n\n

In regards to answering how the detector self-tests, it's a little harder to say. Naturally, the detector can't just burn something to test if it's working or not, so I can only assume that the test procedure functions similarly to a normal smoke alarm.

\n\n

The Nest Protect uses an optical detector, so it may be worth following the advice at How can I safely test my (optical) smoke alarm? on Home Improvement Stack Exchange if you want to test the device's reaction to actual smoke.

\n" }, { "Id": "499", "CreationDate": "2016-12-28T10:48:46.470", "Body": "

I'm planning to implement my own home automation system. It will contain a central Raspberry Pi server and a number of sensors and switches based on 8-bit PIC16 microcontrollers, which communicate with the central Raspberry Pi over radio (using nRF24L01, 2.4 GHz).

\n\n

As an example consider PIC16F1705 with 16k ROM and 1k RAM.

\n\n

In order to secure the system I need some cryptographic algorithms, like

\n\n\n\n

Now my questions are:

\n\n\n\n

For example, Advanced Encryption Standard (AES) in my understanding can't be implemented due to the RAM restriction.

\n", "Title": "Cryptographic algorithms for PIC16 microcontrollers", "Tags": "|security|microcontrollers|pic|cryptography|", "Answer": "

You may be interested in the Skein family of cryptographic hash functions, which are designed to be efficiently implemented on a wide variety of small and large processors. You can trade RAM for speed, or vice versa. The hash can be implemented with as few as 100 bytes of state. The Skein primitive is the basis for both hashing and encryption.

\n\n

The home page has a post offering a freely available PIC implementation; although I didn't find the link, you can probably search online for it.

\n" }, { "Id": "503", "CreationDate": "2016-12-28T14:58:17.327", "Body": "

I've been using IFTTT for various tasks, and it works relatively well, but for the triggers I've been using (RSS feed updates) the update time is quite slow (up to 15 minutes of delay after the feed is updated). For high-frequency feeds, 15 minutes is a severe delay, but in my use case this is manageable, so I'm not too concerned.

\n\n

For other triggers and recipes I want to create, I need to be able to respond more quickly to an event. For example, if I receive an email, I'd like IFTTT to perform an action much more quickly (within 5 minutes at most).

\n\n

Ignoring any issues with IFTTT being down, how can I determine how quickly a trigger will fire? Is the time to react based on the applet (I assume some applets might have push triggers, so IFTTT will be able to respond more quickly)?

\n", "Title": "How delayed can IFTTT triggers be?", "Tags": "|ifttt|", "Answer": "

15 minutes after ringing my Ring doorbell (notifications from the Ring app come literally instantly) my lights (also instant from the app) blink.

\n\n

20 seconds after pressing an Amazon AWS Dash button, the function is executed.

\n\n

Point is: the delay can be anywhere from 5 seconds to 50 minutes. Do not rely on IFTTT.

\n" }, { "Id": "504", "CreationDate": "2016-12-28T20:11:52.857", "Body": "

In MQTT it is the client who initiates the connection with a CONNECT message.

\n\n

[Image: MQTT CONNECT packet]

\n\n

The first field of the packet is the clientId:

\n\n
\n

The client identifier (short ClientId) is an identifier of each MQTT client connecting to a MQTT broker. As the word identifier already suggests, it should be unique per broker. The broker uses it for identifying the client and the current state of the client. (Image and quote are taken from from here.)

\n
\n\n

Now let's say I have two clients, client X and Y in the following situation.

\n\n
    \n
  1. Broker launched, no clients yet.
  \n
  2. X successfully connects to the broker with client-1 id, username is X.
  \n
  3. Now, Y tries to connect using client-1 as id, username is Y.
  \n
\n\n

What will happen?

\n\n
    \n
  1. Based on the clientId, the broker will think that X performs a repeated connection attempt, which is abnormal behavior.
  \n
  2. Nothing extraordinary will happen. Y connects successfully as it uses a different username.
  \n
  3. Nothing extraordinary will happen. The broker will reject Y's connection attempt as the given clientId is already in use.
  \n
\n", "Title": "What will be the result of the following connection scenario in an MQTT network?", "Tags": "|mqtt|", "Answer": "

If the ClientId is the same, the MQTT spec says the broker must consider them to be the same client: Y will be connected using the ID, and X will be disconnected.

\n\n

This part is from the documentation:

\n\n
\n

If validation is successful the Server performs the following steps.

\n \n
    \n
  1. If the ClientId represents a Client already connected to the Server then the Server MUST disconnect the existing Client [MQTT-3.1.4-2].

  \n
  2. The Server MUST perform the processing of CleanSession that is described in section 3.1.2.4 [MQTT-3.1.4-3].

  \n
  3. The Server MUST acknowledge the CONNECT Packet with a CONNACK Packet containing a zero return code [MQTT-3.1.4-4].

  \n
  4. Start message delivery and keep alive monitoring.

  \n
\n
\n\n

See this documentation for more details.
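That takeover rule can be sketched as a toy model (not real broker code):

```python
class ToyBroker:
    """Toy model of MQTT-3.1.4-2: a CONNECT with an in-use ClientId
    forces the broker to disconnect the existing client."""

    def __init__(self):
        self.connected = {}   # client_id -> username

    def connect(self, client_id, username):
        kicked = self.connected.get(client_id)   # existing client, if any
        self.connected[client_id] = username     # new client takes over
        return kicked

b = ToyBroker()
print(b.connect("client-1", "X"))   # None - nobody was kicked
print(b.connect("client-1", "Y"))   # X - X is disconnected, Y takes over
```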

\n" }, { "Id": "512", "CreationDate": "2016-12-29T13:40:07.297", "Body": "

Many manufacturers of 'smart lighting' systems claim that connecting your lights to the Internet of Things will save energy. For example, Samsung SmartThings use energy savings as a key selling point, and have a case study that promises large savings:

\n\n
\n

All in all, after one season of using SmartThings, our home utility bill was $78 less than it was the season before.

\n
\n\n

Philips Hue also call their bulbs 'energy-efficient', although it isn't such a selling point for the Hue compared to how much SmartThings promote energy savings.

\n\n

I would have thought that the majority of energy savings from turning off the lights would be negated by the extra energy used by the processing hubs and wireless radios in every device, as well as the energy requirements of any motion sensors you install.

\n\n

Is it true that energy can be saved (in a typical home situation, assuming the same number and type of bulbs before/after) by using 'smart lighting' such as SmartThings or Hue, or are the benefits exaggerated in terms of energy consumption?

\n", "Title": "Do Internet-connected lighting systems save energy overall?", "Tags": "|smart-home|power-consumption|", "Answer": "

Having used X10 and now Zwave for complete home automation (I mean everything, doors, blinds, lights, towel rails, water pump, hot water, heat pumps etc), I would sit on the fence.

\n

You may need a server (or PC) running all the time, and a UPS for it too. Each light is controlled by a device which is running all the time. Some have physical relays in them, which consume power when activated.

\n

Yes, you are saving power by turning LED lights off when not needed, but this shortens their lifespan drastically. On the face of it they are rated for 30,000 to 50,000 hours, but no one mentions that there is a limited number of switching cycles. Within 7 years, 10% of my LEDs have failed. The ones that have failed are those that are switched on/off based on motion and the ones on dimmers.

\n

The rated cycles can be as low as 20,000. If you turn each light on and off 5 times a day, that gives a life of about 11 years.
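A quick sanity check of that figure, using the rates given above:

```python
rated_cycles = 20_000
cycles_per_day = 5          # on/off 5 times a day
years = rated_cycles / (cycles_per_day * 365)
print(round(years, 1))      # 11.0
```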

\n

In my case, I am using sealed LED downlights, so I have to replace the whole fitting, which blows the saving out of the water.

\n" }, { "Id": "526", "CreationDate": "2016-12-30T18:30:39.377", "Body": "

I've been considering using openHAB recently as my home automation system, but I'd like to connect a Google Home to it so I can control the system with my voice.

\n\n

It looks like openHAB support Amazon Alexa through the openhab-alexa skill, so with an Echo I could issue voice commands and receive simple voice messages, but I'd like to use a Google Home instead.

\n\n

I've checked the Supported Technologies page on the openHAB website, but it looks like there's nothing there for the Google Home/Assistant. Is it possible to connect my Google Home to openHAB? If possible, I'd like to connect directly, but I would be happy with connecting it through a different service if necessary.

\n", "Title": "Does openHAB support the Google Assistant?", "Tags": "|google-assistant|openhab|", "Answer": "

Very old question, but still active, so it deserves an updated answer.

\n

Yes, OpenHAB 3 is fully integrated with Google Assistant via their myopenHAB.org cloud bridge.

\n

https://www.openhab.org/docs/ecosystem/google-assistant/#google-assistant-action

\n" }, { "Id": "528", "CreationDate": "2016-12-31T14:39:51.280", "Body": "

I have set up Mosquitto MQTT on my Windows 7 laptop. I have performed the installation process according to this step by step guide.

\n\n

Installation was alright and I could start the Mosquitto Broker's service by using C:\\Windows\\system32\\services.

\n\n

[Image: Mosquitto Broker in the Windows services list]

\n\n
\n\n

Now what I want is to be able to launch the service from the Windows 7 command prompt. In each case I tried to run the commands from the install directory of Mosquitto (D:\\..\\MQTT\\mosquitto>).

\n\n
    \n
  1. First I have tried the following command according to the documentation:

    \n\n
    mosquitto -d\n
    \n\n
    \n

    -d, --daemon

    \n \n

    Run mosquitto in the background as a daemon. All other behaviour remains the same.

    \n
    \n\n

    Turned out that I cannot do this on Windows.

    \n\n
    1483193297: Warning: Can't start in daemon mode in Windows.\n
  \n
  2. After that, I have tried a command shared on this site.

    \n\n
    mosquitto -p 1883 -v\n
    \n\n

    This one started the broker but not the background service. I have checked the service among the Windows services, and Mosquitto Broker was not started.

  \n
\n\n
\n\n

Does anyone know the proper way of starting Mosquitto broker's service from Windows 7 command prompt?

\n", "Title": "How to start Mosquitto broker service on Windows 7 from command prompt?", "Tags": "|mqtt|mosquitto|microsoft-windows|", "Answer": "

I finally succeeded in finding the correct command on this site. It is:

\n\n
net start mosquitto\n
\n\n

It can be run from any directory. If you receive the following error:

\n\n
D:\\..\\MQTT\\mosquitto>net start mosquitto\nSystem error 5 has occurred.\n\nAccess is denied.\n
\n\n

then you need to run the command prompt as an administrator. On success, the following response will be shown.

\n\n
D:\\..\\MQTT\\mosquitto>net start mosquitto\nThe Mosquitto Broker service is starting.\nThe Mosquitto Broker service was started successfully.\n
\n" }, { "Id": "530", "CreationDate": "2016-12-31T15:09:40.377", "Body": "

I've been researching how to control an Xbox One through Alexa, but I can't find a straightforward way to achieve this. A reddit thread suggests using the Blumoo remote, but I'd rather not pay for more hardware if I can avoid it.

\n\n

This article from InsideGamer suggests that there is some sort of remote control ability:

\n\n
\n

Microsoft\u2019s August Xbox One update brings with it a long awaited feature for Xbox owners, the ability to wake your console to automatically download games, addons and DLC when you aren\u2019t in proximity to it.

\n
\n\n

Is there some form of Wake-on-LAN I can use, perhaps? I'd be happy with any method to turn it on wirelessly though, even if I have to use something like IFTTT to trigger it.

\n", "Title": "How can I boot an Xbox One remotely using Amazon Alexa?", "Tags": "|alexa|wake-on-lan|microsoft-xbox|", "Answer": "

I know you wrote that you would rather not purchase additional hardware, but this is at least an option: you can use the Harmony Hub. Here is how to do it: Controlling Your Entertainment System with Alexa and the Harmony Hub @ VoiceDesigned.com

\n" }, { "Id": "543", "CreationDate": "2017-01-01T19:05:42.300", "Body": "

The OpenFog Consortium, a group that is working on an open fog computing specification, has published a white paper about their architecture.

\n\n

They define a fog node as:

\n\n
\n

The physical and logical network\n element that implements fog computing\n services. It is somewhat analogous to\n a server in cloud computing.

\n
\n\n

However, I'm having a bit of trouble intuitively understanding what a fog node would be. Would it just be a server or hub device hosted locally on your own network, rather than in a massive server farm? Would a smart home hub (like a SmartThings Hub) be a very simple example of a fog node under the OpenFog definition?

\n", "Title": "What would a \"fog node\" consist of in OpenFog?", "Tags": "|definitions|fog-computing|openfog|", "Answer": "

What it looks like they're trying to do is establish semantic definitions for messages and rules that can be interpreted or processed at any layer, and that the layers can be migrated up or down. So yes, a home hub could be a fog node, but only if it supports the fog behaviors and messages.

\n\n

Think of a typical home automation architecture:

\n\n
Remote -<Z-wave>- light switch -<Z-wave>- home hub -<Ethernet>- cloud server -<GSM>- phone\n
\n\n

You trigger the remote and it sends a Z-wave message to the switch, which turns on the light. You can configure your home hub to send a message to the switch based on other rules (the garage door is opened.) And your watch can talk to your phone which can contact the company's cloud, which sends the message to your home hub, which sends it to your light switch.

\n\n

Today, you can do the first part of this by setting up a scene in your light switch that is triggered by a paired remote. Using Z-wave, no home hub is needed. You can add a home hub and set up a new scene to trigger the light switch if the remote's button is pressed, but you'd better delete the scene in the light switch first. You can use a cloud interface to configure your hub's behavior, perhaps triggering your hub via IFTTT. And you can also use your phone to contact a web interface on the cloud and have it turn on the switch.

\n\n

What OpenFog is aiming to do is to have those device definitions be universal, the rules be platform independent, and the messages be transport independent. They'll share common security and authentication methods. That means you set it up once, no matter where you are, and the definitions and rules are migrated up and down the architecture.

\n\n

From your phone you could view your devices, which would include the light switch and remote control, and say \"I want button 1 on the remote to toggle the light switch\". The rules might be created right there in your phone, then transported to the cloud server. The cloud server could examine the inputs and decide \"All the responsibility for lights and remotes belongs at a lower level\", and push the rules down to the home hub. The home hub could say \"hey, Z-wave knows how to run a scene in this model of light switch, so I will push these rules to the light switch and remote.\" Next time you push the button on the remote, the signal will be caught by the light switch and the light will turn on. This would provide the fastest possible response time (no hub needed). And by having the rules backed up at a cloud level, if you have to replace a defective light switch, none of the rules need to change. They'd just be pushed back down to the replacement switch.

\n\n
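To make this idea concrete, a transport-independent rule and the "push it down to the lowest capable layer" decision might look something like the following sketch. This is purely hypothetical Python; OpenFog does not define this schema, and all device names here are illustrative:

```python
# Hypothetical, transport-independent rule of the kind described above.
rule = {
    "trigger": {"device": "remote-1", "event": "button-1-pressed"},
    "action": {"device": "light-switch-kitchen", "command": "toggle"},
}

# Each layer advertises which devices it can reach directly; the rule is
# pushed down to the lowest layer that can see both endpoints, giving the
# fastest possible response time.
layers = [
    ("light-switch", {"remote-1", "light-switch-kitchen"}),   # lowest layer
    ("home-hub", {"remote-1", "light-switch-kitchen", "garage-door"}),
    ("cloud", {"remote-1", "light-switch-kitchen", "garage-door", "phone"}),
]

def place_rule(rule, layers):
    """Return the name of the lowest layer that can host this rule."""
    needed = {rule["trigger"]["device"], rule["action"]["device"]}
    for name, reachable in layers:
        if needed <= reachable:
            return name  # first (lowest) capable layer wins
    return "cloud"       # fall back to the top of the hierarchy

print(place_rule(rule, layers))  # -> light-switch
```

If the defective light switch is replaced, the rule itself is unchanged at the cloud level and simply gets pushed back down, which is exactly the maintenance benefit described above.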

OpenFog also will provide for elastic scale. Let's say you place 1000 light switches in an office building, using a network technology that has a latency that goes up exponentially based on the number of nodes. It could scale communications such that the 1000 nodes are split into 10 networks, so that no network ever has a latency longer than 200 milliseconds.

\n\n

It also specifies scalability of control. If you've ever worked with systems built for GUI control you get used to instructions like these: \"right click on the node, pick 'settings', then set intensity to '75%', then 'OK'\". Such instructions are worse than useless when it comes to managing 60,000 nodes. OpenFog should enable automated groupings of nodes allowing scalable control. \"For all nodes in Eastern Standard Time zone, set intensity to 75%,\" or \"For all nodes in profit center 12, set intensity to 65%.\"

\n\n

It also specifies autonomy where possible. If the Peoria, Illinois branch replaces a furnace vent duct control, it shouldn't require an HQ person to delete the old control unit ID, then add the replacement ID. The local maintenance person should be able to do that herself. The security still has to ensure that the furnace repair person doesn't have the authority to disable the burglar alarm sensors on the back door.

\n\n

Now, place all of this behind open standards so that a Honeywell burglar alarm and a Trane heating system all interoperate in the same logical network as your Philips light bulbs, Leviton Z-wave switches, and your Fitbit scale.

\n\n

So, is your SmartThings hub an OpenFog node? Not today, and it won't be unless and until it implements and interoperates with these standards. But a future home hub certainly could be an OpenFog node.

\n" }, { "Id": "546", "CreationDate": "2017-01-02T00:22:15.090", "Body": "

Is it possible to configure an Amazon Echo so that Alexa will tell you about upcoming events (e.g. from Google Calendar) an hour before each of them by waking itself up and reminding you about them?

\n", "Title": "Can you setup Alexa to remind you about calendar events?", "Tags": "|alexa|amazon-echo|", "Answer": "

My experience, at least with Outlook Calendar events: if I set the event to give me a reminder at a certain time prior to the event, my Echo device will give me a voice prompt at that time. For instance, I have an event tomorrow at 6 PM. I set the event in Outlook Calendar to remind me 24 hours prior, and my Echo reminded me of this event today at 6 PM. In short, this capability is driven by your calendar program.

\n" }, { "Id": "549", "CreationDate": "2017-01-02T11:57:42.960", "Body": "

Is there any way to make Alexa read Kindle books in the UK? I couldn't find any options to do that.

\n", "Title": "Can Alexa read Kindle books in the United Kingdom?", "Tags": "|alexa|amazon-echo|united-kingdom|", "Answer": "

I've just tried it with mine, but she says I don't have any books in my Audible account. I do have books in my Kindle account, so I guess it is not rolled out in the UK yet.

\n\n

I wasn't trying to use Audible, as I don't have an Audible account. I've now tried the keyword "Listen to a Kindle book" with a book name I have in my Kindle account. Alexa says she can't find the book. I've been into the Amazon UK website and tried to "deliver" the book to my Echo or Echo Dot, but those options are currently greyed out, unlike for my Kindle reader devices.

\n\n

Reading a blog post today, it seems that you need to have books that Alexa is able to read aloud.

\n" }, { "Id": "551", "CreationDate": "2017-01-02T12:35:09.203", "Body": "

Can you enable a new skill in Amazon Echo by voice?

\n\n

Ideally, I'd like to ask Alexa which skills they offer related to a specific category (like dictionary), then I'd like to enable one.

\n\n

Is this possible?

\n", "Title": "Can you teach Alexa a new skill by voice?", "Tags": "|alexa|amazon-echo|", "Answer": "

It is possible to enable any skill by voice, provided you already know the name of the skill.

\n

The Amazon documentation for adding skills is relatively straightforward:

\n
\n

If you know the exact name of the skill you want, you can say, "Enable [skill name] skill". Some skills may require you to link to an existing account and a separate subscription in order to use the skill.

\n
\n

However, this requires you to know the name of the skill beforehand, which may not be particularly useful. To get around this, you can take advantage of the Amazon-developed Skill Finder skill. This allows you to search through skills by voice with the following commands:

\n
\n

Alexa, tell Skill Finder to give me the Skill of the Day

\n

Alexa, tell Skill Finder to give me the newest skills

\n

Alexa, tell Skill Finder to give me top skills

\n

Alexa, tell Skill Finder to list categories

\n

Alexa, tell Skill Finder to list the newest skills in the education category

\n

Alexa, tell Skill Finder to list the top skills in the games category

\n
\n

You can enable this skill with "Alexa, enable Skill Finder skill", and then the above commands will be supported.

\n" }, { "Id": "554", "CreationDate": "2017-01-02T20:50:35.890", "Body": "

Some sites, such as this article on end-to-end encryption for IoT, suggest that all traffic sent across the IoT network should be encrypted, saying:

\n\n
\n

Enterprises, government agencies and other organizations should take adopt [sic] an \u201cencrypt-everything\u201d strategy to protect against IoT-enabled breaches.

\n
\n\n

I can understand the need to encrypt any data that could be confidential, such as the commands to lock/unlock a 'smart lock' device, but is it really necessary to encrypt everything, such as the sensor that reports the current thermostat reading?

\n\n

Is it simply the case that \"encrypt everything\" stops people from forgetting to encrypt data that really ought to be encrypted, or is there a real benefit from using cryptography, despite the extra power, time and cost of it?

\n", "Title": "Is there any advantage in encrypting sensor data that is not private?", "Tags": "|security|sensors|privacy|cryptography|", "Answer": "

In addition to other answers, if the data is sent in plaintext it can be modified.

\n\n

Apart from the problems already mentioned that faked data can cause (a lying thermometer turning the heat to the max in the middle of a hot summer might create a fire hazard, for example), manipulating data in transit can lead to compromise of the IoT device and everything that accesses it. For example, your notebook might be checking the temperature, but the HTML page showing it could be replaced in transit with a computer virus designed to infect your internal network, or the JSON data might be modified to break into an application parsing the malformed data.

\n\n
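As a middle ground, note that even a public reading can be authenticated without being encrypted, so tampering in transit is at least detectable. A minimal sketch using Python's standard hmac module (the key and message format here are illustrative, not from any particular IoT stack):

```python
import hmac
import hashlib

SHARED_KEY = b"device-provisioned-secret"  # illustrative; provision per device

def sign_reading(reading: str) -> str:
    """Append an HMAC tag so modification in transit is detectable."""
    tag = hmac.new(SHARED_KEY, reading.encode(), hashlib.sha256).hexdigest()
    return f"{reading}|{tag}"

def verify_reading(message: str) -> bool:
    """Recompute the tag and compare in constant time."""
    reading, _, tag = message.rpartition("|")
    expected = hmac.new(SHARED_KEY, reading.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

msg = sign_reading("temp=21.5C")
assert verify_reading(msg)                            # genuine reading passes
assert not verify_reading(msg.replace("21.5", "99.9"))  # tampered reading fails
```

The reading itself stays in plaintext, so this costs far less than full encryption while still defeating the in-transit modification attacks described above.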

Not that implementing security is without its own risks, especially in the IoT world. Security is hard, and implementing it usually vastly increases the codebase, and with it the number of bugs (and thus possible attack vectors / exploit opportunities). IoT devices rarely get firmware upgrades, so when a device without auto-update has a problem, it is almost guaranteed to provide botnets with extra zombie machines.

\n\n

And yes, auto-upgrade itself is not without issues - from privacy concerns to the possibility that evildoers will take control of it if not implemented properly; but it should be a lower risk than hoping your first firmware will ship without any security bugs that allow attackers to swell their zombie ranks.

\n" }, { "Id": "568", "CreationDate": "2017-01-03T18:00:18.233", "Body": "

IoT enabled light bulbs have been on the market for a while now. The Philips Hue is probably the best known. But I think controlling bulbs directly is a rule-maintenance disaster waiting to happen. If a bulb goes out (and yes, LED bulbs do fail), you have to replace the bulb, and remember to update any scenes or other rules that control the bulb (or are triggered by the bulb.) Or if you move a bulb from fixture A in the kitchen to fixture B in the bedroom, (perhaps while cleaning), the rule that says \"Turn on kitchen lights\" will now illuminate the bedroom.

\n\n

That may not seem like a big problem today for those of us who understand the configurations of our home automation systems intimately, but imagine a home automation system set up by a professional integrator for a typical customer. The homeowner may not know how to change the rules, so replacing a lightbulb could cost them not only the price of the smart bulb, but an additional service call charge from the integration company. A smart switch or fixture solves this problem because the switch doesn't move with typical maintenance. (The switch offers the same problem of configuration if it fails and needs to be replaced, of course, but switches typically have better life expectancies than bulbs, which are generally considered consumables.)

\n\n

On the other hand, an IoT enabled light switch can't fully control every aspect of lighting the same way a smart light bulb can. A switch can do simple dimming for certain technologies of bulbs, but it can't control the color of the Hue bulb.

\n\n

Much worse, smart switches use different types of electronic circuitry to perform dimming, and must be carefully matched to the technology of the bulbs they are controlling. A typical older dimmer can dim only incandescent bulbs and not CFL or LED bulbs; some dimmers can dim both incandescent and CFL but not LED bulbs; some dimmers can control incandescent bulbs and LED bulbs, but not CFLs; and some dimmers can control inductive loads like halogen transformers, but not CFLs or LEDs! With incandescent bulbs being replaced because they're such energy wasters, this has been a real problem, too.

\n\n

So what's the most practical approach? Buy expensive bulbs that are directly controllable and expensive smart switches to control them, or buy cheap bulbs and just expensive smart switches, and give up on the idea of controllable color lighting?

\n", "Title": "Is it better to control smart lights or smart light switches/fixtures?", "Tags": "|smart-home|", "Answer": "

The bulbs have an integral role in setting the look, feel and mood of a room, while the switches primarily handle on/off control and the energy management of the home.

\n\n

So as these two distinct products advance, each improves within its own purpose. A smart bulb delivers exactly what you would expect of a bulb: hundreds of color variations and the liberty to change them to suit the situation.

\n\n

On the other hand, smart switches have taken a great leap in controlling all home appliances, from a 40 watt bulb to high-current loads such as air conditioners, washing machines and irons. Rather than each manufacturer "smartifying" its individual products and shipping lots of separate apps, I would say smart switches are ahead of all of these.

\n\n

My conclusion: there's no point in pitting smart bulbs against smart switches. Install smart switches to automate the entire home, then buy smart bulbs for the rooms that need them. Even if a smart switch can't control a bulb's dimming or color variation, the light can at least be switched off from anywhere, which is what a switch is destined to do.

\n\n

For smart bulbs, I guess Philips is the best, and for smart switches Curiousfly has come up with a complete home solution:

\n\n

https://www.youtube.com/watch?v=TmFBNNAAFc0&t=6s

\n" }, { "Id": "585", "CreationDate": "2017-01-04T14:03:10.023", "Body": "

I have a 3-speed ceiling fan with integrated dimmable lights which when installed prior to my owning the house was fitted with an infrared wireless receiver.

\n\n

For a long time the remote did not work (poor quality construction) and after repairing it and having a few months of use have since found the receiver now isn't working (I've tried all the jumper combinations).

\n\n

So I now seek to replace the wireless receiver with one that supports a home automation system rather than just another infrared like-for-like replacement. My existing remote control looks like this:

\n\n

\"Infra-red

\n\n

Has anyone come across a home-automation system with wireless receiver (ideally compatible with at least one of the major brand solutions such as Z-wave, ZigBee, LightwaveRF, Smartwares etc, but honestly I'd consider anything) for a 3-speed ceiling fan with integrated dimmable lights?

\n\n

I've seen an Insteon Ceiling Fan and Light Controller module available for U.S. ceiling fans running on 120V AC but they don't seem to have a product for 240V AC as required for UK models.

\n\n

I've also seen a Qubino Flush Dimmer which could control the dimmable lights and their website says a second unit of the same would also be suitable for controlling the fan but I'm not certain of this aspect since it is a 3-speed fan (i.e. off, slow, medium, fast), not just variable speed (I can't set it to 62% speed for example).

\n\n

What I don't know however is whether this 3-speed setup is a limitation of the current wireless infra-red receiver/controller or of the fan unit itself, and this is where I find myself a bit out of my depth. The ceiling fan unit looks similar to this, except with no pull-cords for operation:

\n\n

\"Ceiling

\n\n

I'm really trying to establish here whether such a part even exists, or how else I might be able to integrate the unit with a combination of parts.

\n\n

Any help appreciated thank you.

\n\n

For further clarification: This model has a wireless-only controller, there is no wall-mounted controller in addition to the remote control. The wall switch is just a fused spur, then the wiring from the fused spur to the ceiling fan unit is just standard UK shielded twin and earth. There are no controls for the lights or fan except for on the remote control. The remote control is a Hampton Bay UC7080T.

\n", "Title": "UK-compatible (240VAC) receiver/controller for ceiling fan with dimmable lights", "Tags": "|smart-home|hardware|", "Answer": "

The good news for you is that the choice of fan-speed and lighting control is often independent of the actual ceiling fan itself. Most lighted ceiling fans operate using two independent circuits; the wall switch will have one hot wire that delivers power to the fan motor, and a separate hot wire that delivers power to the light fixture. (This is not counting any ground wires or neutral wires. This also may not be true for some digitally controlled fans, so you'd have to inspect the wiring in your wall switch to confirm. You should consult a qualified electrician before continuing.)

\n

Once you've confirmed that you have two separate wires leading from your wall switch to control your fan and lights, you can start the task of replacing your broken controller. Unfortunately, buying a light dimming switch has gotten kind of complex with the introduction of new lighting technologies.

\n

The light fixture circuit can be controlled using a dimmer switch that is rated for the type of lamps you have. Incandescent light bulbs are very simple "resistive loads", and can be dimmed using virtually any of the common technologies. LED light bulbs are electronic circuits, and often require a special dimmer switch designed to work with LED bulbs. Low voltage halogen bulbs use a transformer to lower the voltage; to dim these requires a dimmer switch capable of controlling an "inductive load" (the transformer is an inductor.)

\n

Most fan manufacturers recommend not using an ordinary dimmer switch, but instead state that the fan motor should be controlled by a "fan motor controller". That's because most fan motors are "inductive motors" and require a speed controller that is designed for inductive loads - the same kind of load controller that halogen lighting requires. The Qubino site says it is designed to control all kinds of loads, including low-voltage halogen lighting transformers, so their controller should work with your fan motor. (I recommend asking their support people to confirm that it would work with your model of fan.)

\n

As a plus, if you put in a dimmer, you will have a truly variable speed fan. You won't be limited to the three speeds the manufacturer's controller provided.

\n

UPDATE: I just (Sept 2020) purchased an Inovelli Fan & Light Switch (LZW36). It contains a wall switch with two separate buttons (one for fan, one for lighting), and an electronic fan/light controller module that is to be mounted in the ceiling at the fan. It's fully Z-wave enabled, and the wall switch nicely fits in the existing single-gang box (which I can't change due to the wall's construction.) According to reviews posted on their site, some customers are reporting success replacing existing poor quality remotes and controllers. Unfortunately, the company is brand new and I think they are selling only 120VAC products at this time.

\n" }, { "Id": "593", "CreationDate": "2017-01-04T16:03:04.280", "Body": "

Symantec is releasing a new router, the Norton Core, which they describe as \"The secure router for your connected home.\"

\n\n

I found out about this while reading an article on Engadget, but their description of what the device actually does and how it's better than a normal router isn't particularly great.

\n\n

What advantages does the Norton Core have over a regular router as far as improving the security of a simple smart home setup?

\n", "Title": "How does the Norton Core increase the security of a \"smart home\"?", "Tags": "|smart-home|security|routers|norton-core|", "Answer": "

It doesn't have an open telnet port. I still come across lots of residential / off-the-shelf Wi-Fi routers with open telnet ports. Their firmware is usually available online too, sometimes with backdoor accounts, and can easily be examined with binwalk. If you are renting your Wi-Fi router from your ISP, it probably has an open telnet port. Wi-Fi routers usually have weak security in general. If you have a Netgear router, have fun googling "netgear geardog".

\n" }, { "Id": "601", "CreationDate": "2017-01-04T19:23:14.037", "Body": "

I've been reading over the web about the history of the Internet of Things, and one of the most interesting things I have run across is the Carnegie Mellon University's Coke machine. According to various articles I have read, including this one from ewahome.com, it was a Coke machine that was designed to be able to tell people whether cold Coke was available in the University's Coke Machine.

\n

I am curious, however, as to what connection protocol would have been used back then for this machine. Were they sending the signals through telephone cables, or what? How did they go about sending the signal up to the various people who wanted information about the Coke?

\n", "Title": "What connectivity protocol did the Carnegie Mellon University's Coke Machine use?", "Tags": "|protocols|", "Answer": "

The Coke Machine, rather amusingly, has its own website with a bit more information on its history.

\n\n

The Ancient History document explains how the original Coke Machine operated:

\n\n
\n

The final piece of the puzzle was needed to let people check Coke\n status when they were logged in on some other machine than CMUA. CMUA's\n Finger server was modified to run the Coke status program whenever\n someone fingered the nonexistent user \"coke\". (For the uninitiated,\n Finger normally reports whether a specified user is logged in, and if\n so where.) Since Finger requests are part of standard ARPANET (now\n Internet) protocols, people could check the Coke machine from any CMU\n computer by saying \"finger coke@cmua\". In fact, you could discover the\n Coke machine's status from any machine anywhere on the Internet! Not\n that it would do you much good if you were a few thousand miles away...

\n
\n\n

For the first generation Coke Machine, in the 70s and 80s, the finger command was (ab)used while connecting through ARPANET, the precursor of the Internet. Not exactly a complex protocol, but it worked well enough to indicate the state of the coke machine without being overly difficult to set up.

\n\n

If you're interested in exactly how the finger command worked, here is an extract from Wikipedia detailing how it operates:

\n\n
\n

The finger daemon runs on TCP port 79. The client will (in the case of remote hosts) open a connection to port 79. An RUIP (Remote User Information Program) is started on the remote end of the connection to process the request. The local host sends the RUIP one line query based upon the Finger query specification, and waits for the RUIP to respond. The RUIP receives and processes the query, returns an answer, then initiates the close of the connection. The local host receives the answer and the close signal, then proceeds closing its end of the connection.

\n
\n\n

The finger command can also provide some custom information, such as full name, email address, and some custom text. Presumably the custom text was used to send the state of the Coke Machine and the coldness of the Cokes inside.
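The Finger protocol (later formalized in RFC 1288) is simple enough that a client fits in a few lines: open a TCP connection to port 79, send the query line terminated by CRLF, and read until the server closes the connection. Here is a minimal sketch in Python; this is not the original CMU code, and the query in the final comment is only illustrative of the "finger coke@cmua" usage described above.

```python
import socket

def finger(user: str, host: str, port: int = 79) -> str:
    """Send a Finger query and return the server's reply.

    The wire protocol is just the query line terminated by CRLF;
    the reply is everything the server sends before closing.
    """
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server closed the connection: reply is complete
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

# The CMU query would have looked something like (host name illustrative):
# print(finger("coke", "cmua.cs.cmu.edu"))
```

Because the reply is free-form text, the modified CMUA Finger server could return whatever the Coke status program printed, with no protocol changes at all.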

\n" }, { "Id": "635", "CreationDate": "2017-01-05T16:57:43.183", "Body": "

I recently found out about Android Things, Google's platform for developing an IoT device on top of the Android system.

\n\n

An InfoQ article suggests that the updates Google provide to Android Things will automatically be pushed to devices:

\n\n
\n

Certified hardware will come with system images provided by Google, including future updates that are automatically delivered without developer\u2019s intervention.

\n
\n\n

However, past experience with Android phones suggests that this is likely to lead to breakage unless the developer reviews the update and approves it before sending it out to consumers.

\n\n

Is it true that Google will be pushing updates to IoT devices using Android Things without the device developers verifying that it works? Is this likely to cause breakage?

\n", "Title": "Are Android Things updates going to be delivered automatically?", "Tags": "|over-the-air-updates|android-things|", "Answer": "
\n

Is it true that Google will be pushing updates to IoT devices using Android Things without the device developers verifying that it works?

\n
\n\n

Yes, for Android Things devices Google plans to push upgrades continuously without device developers verifying them.

\n\n
\n

Is this likely to cause breakage?

\n
\n\n

This is supposedly achieved by ensuring that device-developer apps and BSP vendor libraries interact with the Android Things OS components only via an API contract. If and when the BSP vendor libraries require an update, it is pushed as part of the OS upgrade package.

\n\n

It seems Google tests each upgrade in its own labs on the first set of devices. This is closer to the Chrome OS model of upgrades and maintenance.

\n" }, { "Id": "636", "CreationDate": "2017-01-05T17:05:28.790", "Body": "

In my ongoing endeavors to get my Raspberry Pi to command my stuff, I set up a Mosquitto MQTT broker. With the base settings everything went reasonably fine.

\n\n

I could post test messages with the publish command and receive them with the subscribe command. Then I decided to up the log level and modified the mosquitto.conf file as follows. The version with essentially the whole log section commented out works. The other doesn't.

\n\n

I narrowed it down to the line with the log file.

\n\n
$ diff mosquitto.conf mosquitto.conf.old\n408,410c408,410\n< #log_dest file /var/log/mosquitto/mosquitto.log\n---\n> log_dest file /var/log/mosquitto/mosquitto.log\n
\n\n

The file exists and is owned by mosquitto:mosquitto, the user which runs the service.

\n\n

The very helpful message I do get when trying with the logging is the following:

\n\n
mosquitto_pub -h localhost -t thisisme -m 5\nError: Connection refused\n
\n\n

By now, I'm sure that the service dies a silent death.

\n\n
$ sudo service mosquitto status\n\u25cf mosquitto.service - LSB: mosquitto MQTT v3.1 message broker\n   Loaded: loaded (/etc/init.d/mosquitto)\n   Active: active (exited) since Fri 2017-01-06 11:16:38 CET; 4min 24s ago\n  Process: 2222 ExecStop=/etc/init.d/mosquitto stop (code=exited, status=0/SUCCESS)\n  Process: 2230 ExecStart=/etc/init.d/mosquitto start (code=exited, status=0/SUCCESS)\n\nJan 06 11:16:38 T-Pi mosquitto[2230]: Starting network daemon:: mosquitto.\nJan 06 11:16:38 T-Pi systemd[1]: Started LSB: mosquitto MQTT v3.1 message broker.\n
\n\n

I'm running Raspbian GNU/Linux 8 (jessie) with the following mosquitto packages:

\n\n
libmosquitto1/stable,now 1.3.4-2 armhf [installed,automatic]\nmosquitto/stable,now 1.3.4-2 armhf [installed]\nmosquitto-clients/stable,now 1.3.4-2 armhf [installed]\npython-mosquitto/stable,now 1.3.4-2 all [installed]\n
\n\n

Further comment requested information:

\n\n
ls -ld /var /var/log /var/log/mosquitto /var/log/mosquitto/mosquitto.log\ndrwxr-xr-x 11 root      root       4096 Sep 23 06:02 /var\ndrwxr-xr-x  8 root      root       4096 Jan  6 21:07 /var/log\ndrwxr-xr-x  2 mosquitto mosquitto  4096 Jan  5 14:36 /var/log/mosquitto\n-rw-r--r--  1 mosquitto mosquitto 14233 Jan  6 21:07 /var/log/mosquitto/mosquitto.log\n
\n\n

The only log file in /var/log that gets modified is the auth.log from my sudo.

\n\n

What did I break?

\n\n\n", "Title": "Mosquitto on Raspberry Pi refuses connection after changing log settings", "Tags": "|mqtt|raspberry-pi|mosquitto|", "Answer": "

One way to debug this would be to run mosquitto manually with the same options as your init system is using, then look at the output. For example:

\n\n
mosquitto -v -c <path to config file>\n
\n\n

Adding -v will ensure that you have verbose logging, regardless of the config file settings.

\n" }, { "Id": "639", "CreationDate": "2017-01-05T18:51:56.360", "Body": "

Amazon Echo contains multiple good microphones. Is it possible to link them to my PC so that I can use the microphone with software like Skype?

\n", "Title": "Is it possible to use Amazon Echo as a normal bluetooth microphone for a PC?", "Tags": "|amazon-echo|microphones|", "Answer": "

Nope.

\n\n

There are currently just two Bluetooth profiles supported.

\n\n
\n

Advanced Audio Distribution Profile (A2DP)
\n This profile allows you to stream audio from your mobile device (such as a phone or tablet) to Echo.

\n \n

Audio / Video Remote Control Profile (AVRCP)
\n This profile allows you to use hands-free voice control when a mobile device is connected to your Echo.

\n
\n\n

(Amazon Support Page)

\n" }, { "Id": "651", "CreationDate": "2017-01-06T14:06:02.293", "Body": "

I'm making an IoT device that will serve a web app over WiFi which can be accessed to control it.

\n\n

I would like to make it easy to set up. For example, the easiest way I can imagine is as follows; all it would need is a phone or similar with NFC capabilities. (Only hypothetically, because this assumes NFC etc can do it!)

\n\n
    \n
  1. User powers up IoT device
  2. \n
  3. User holds phone against IoT device's NFC pad
  4. \n
  5. IoT device asks phone for WiFi credentials
  6. \n
  7. IoT device uses credentials to connect to WiFi
  8. \n
  9. IoT device directs phone's browser to its URL
  10. \n
\n\n

But right away I can see possible flaws:

\n\n\n\n

Obviously some users will not have NFC-compatible phones, so there would also have to be a secondary method.

\n\n

The only awareness of a solution I have comes from how my WiFi IP security camera works. It requires first connecting it via Ethernet cable to a router on a 192.168.1.X subnet with a given IP reserved (e.g. my camera required 192.168.1.100 to be reserved or free). From there, the user navigates to http://192.168.1.100/, logs in with the camera's supplied username and password, and then configures the camera with the WiFi access point name and password.

\n\n

But that method had one serious disadvantage: it required that the router operate on the subnet 192.168.1.X. Mine operated on 192.168.0.X. Thankfully I was able to reconfigure it, but my new router doesn't have that ability! I would have been stuck. Additionally, the above method is quite a pain, with quite a few steps.

\n\n

What other solutions have been implemented to solve the problem of setting up an IoT device's WiFi connection, and then informing the user of its IP address so he/she can access its web interface?

\n", "Title": "How can I easily configure Wi-Fi on a smart device without a screen?", "Tags": "|communication|wifi|nfc|", "Answer": "

I am glad that you got other answers, because NFC is probably the wrong technology for this.

\n\n

Your phone reads NFC tags and acts upon them; there is no request to the phone, and no to-and-fro communication.

\n\n

So, at best, you could tag the device \u2013 with a URL. When the phone taps the device, it is redirected to a web page which allows the user to visually configure and then instructs the device non-visually on the new configuration.

\n\n
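For what it's worth, tagging the device with a URL just means writing a single NDEF URI record, which is only a handful of bytes. A minimal sketch of building one (byte layout per the NFC Forum URI record type; the URL is purely illustrative):

```python
def ndef_uri_record(url: str) -> bytes:
    """Build a single short NDEF URI record ('U' type, TNF Well Known)."""
    # Common URI prefixes are abbreviated to a one-byte code.
    prefixes = {"http://www.": 0x01, "https://www.": 0x02,
                "http://": 0x03, "https://": 0x04}
    code, rest = 0x00, url
    for prefix, c in prefixes.items():
        if url.startswith(prefix):
            code, rest = c, url[len(prefix):]
            break
    payload = bytes([code]) + rest.encode()
    # Header flags: MB | ME | SR | TNF=0x01 (Well Known) -> 0xD1
    return bytes([0xD1, 0x01, len(payload)]) + b"U" + payload

record = ndef_uri_record("https://example.com/setup")  # hypothetical setup URL
```

A tag writer app (or library) would wrap this record in the tag's memory layout; the point is just how small the data is.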

It\u2019s not difficult, but I would recommend one of the other answers. I am posting this only to offer another option to you and any future searchers of this question.

\n\n
\n

Obviously some users will not have NFC-compatible phones, so there would also have to be a secondary method.

\n
\n\n

Indeed :-)

\n" }, { "Id": "664", "CreationDate": "2017-01-06T21:01:27.100", "Body": "

With Perch, it seems that you can use an old Android smartphone with a camera as a webcam, which seems like a great idea to reuse old devices that you no longer want.

\n\n

However, phones tend to use a lot of energy in my experience (some barely last a day on battery in normal usage!), and the camera seems to use even more than usual, so I'm concerned that there may be a lot of power consumption.

\n\n

If I reused a Samsung Galaxy S3 with Perch, is it likely to cost more than just buying a webcam and connecting it to my network, or is the difference negligible?

\n\n
\n\n

In regards to what I'm comparing, I'm interested in whether (cost of webcam + electricity) would be cheaper than (cost of electricity for phone). If the webcam only pays for itself after 5 or 10 years, I'm not too concerned and won't bother buying it, but if I'm going to see savings after 6 months it might be more valuable.

\n\n

As far as I can tell, Perch just runs the camera 24/7 and communicates via Wi-Fi, but I've turned Bluetooth and mobile data off, along with GPS, since I don't need them.

\n\n

As requested by Mawg, an example of the sort of thing I mean is the D-Link DCS-932L, whose datasheet says the maximum power consumption is 2 W. There isn't any information on the typical power consumption, however, so an answer with some explanation of what might typically be expected could be helpful.

\n\n

I'm not particularly interested in any additional features such as low-light vision, although it would be a bonus. Motion detection would be useful though, since Perch does seem to support this by default and it would be advantageous. Notifications of motion would be good as well.

\n\n

The camera will be situated inside, so I wouldn't expect any temperature extremes or dampness (otherwise I have a much bigger problem!).

\n\n

When charging, I would be using a standard 5W USB charger for the phone, which would be connected all the time.

\n", "Title": "Will reusing a mobile phone as a smart webcam be cheaper than buying a dedicated webcam?", "Tags": "|power-consumption|digital-cameras|microprocessors|", "Answer": "

If your 5W charger can handle the demands, it will use three watts more than the spec of 2W for the cam you linked to. Every two weeks, those three extra watts will use one kilowatt-hour, which I'll guess costs you fifteen cents, or about four (US) dollars annually. A bit of Google-fu found the D-Link DCS-932L delivered for US$ 37. Crunch the numbers, and you are looking at more than nine years before you break even. One consideration you didn't mention was how the phone (cam) will be mounted and aimed. A nicely framed and usable camera angle can be tricky to get without the right equipment. If you're interested in a solution for almost no money, ask, and I'll describe how.

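The arithmetic above is easy to re-run with your own electricity rate; a quick sketch (assuming, as the answer does, the worst case of the full 5 W charger rating against the 2 W webcam spec, US$0.15/kWh and a US$37 webcam):

```python
# All figures are assumptions taken from the answer; adjust to your rates.
EXTRA_WATTS = 5 - 2            # phone charger rating minus webcam spec
PRICE_PER_KWH = 0.15           # US dollars
WEBCAM_COST = 37.0             # US dollars, delivered

kwh_per_year = EXTRA_WATTS * 24 * 365 / 1000    # ~26.3 kWh
cost_per_year = kwh_per_year * PRICE_PER_KWH    # ~$3.94
breakeven_years = WEBCAM_COST / cost_per_year   # ~9.4 years
print(f"break-even after {breakeven_years:.1f} years")
```

The phone will usually draw less than the charger's full rating, which only pushes the break-even point further out.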
\n" }, { "Id": "678", "CreationDate": "2017-01-07T18:37:48.760", "Body": "

The wrist-worn devices generally don't measure the heart rate precisely enough to measure heart rate variability (HRV). Does the \u014cura ring measure heart rate precisely enough to get quality HRV data?

\n", "Title": "Does the \u014cura ring measure heart rate well enough to assess HRV?", "Tags": "|sensors|heart-rate-monitors|oura-ring|", "Answer": "

Alright, I guess I can't comment but I can write a separate answer.

\n\n

I looked a bit into the answer by Bence and summarized some more details below.

\n\n

Current links, since his are now out of date:

\n\n

https://help.ouraring.com/sleep/validity-of-the-oura-ring-in-determining-sleep-quantity-and-quality

\n\n

https://ouraring.com/wp-content/uploads/2017/08/Validity-of-the-OURA-Ring-in-determining-Sleep-Quantity-and-Quality-2016.pdf

\n\n
\n

Epoch by epoch comparison of the sleep stages determined by the \u014cURA ring and manually scored based on Polysomnography. Sleep stages were classified into Wake, REM, Light and Deep by both methods. The \u014cURA ring was worn on the nondominant hand. The \u014cURA ring displays 65.3% agreement, Cohen\u2019s\n kappa 0.449.

\n
\n\n

The results are improved if you combine light and deep into a non-REM group and results are worse if you wear it on your dominant hand.

\n\n

Some caveats to consider:

\n\n
    \n
  1. Low sample size (n=14)
  2. \n
  3. Funded directly by Ouraring (if you think this could bias the study)
  4. \n
\n\n
\n

Fourteen subjects volunteered to participate in this study. Eight full\n polysomnograph recordings (four female and four male) and six EOG\n only recordings (two female and four male) were made. The recording\n and subsequent analysis was carried out by the sleep laboratory of the\n Finnish Occupational Health Institute, Helsinki, Finland, an independent\n research institute. The research was funded by Ouraring.

\n
\n" }, { "Id": "684", "CreationDate": "2017-01-07T20:01:52.837", "Body": "

The FTC have filed a legal complaint against D-Link alleging that their routers and IP cameras have critical security vulnerabilities and are misleading consumers into believing they are safe.

\n\n
\n

Defendants have failed to take reasonable steps to protect their routers and IP\n cameras from widely known and reasonably foreseeable risks of unauthorized access, including\n by failing to protect against flaws which the Open Web Application Security Project has ranked\n among the most critical and widespread web application vulnerabilities since at least 2007.

\n
\n\n

I did a bit more research and found the response from D-Link, saying:

\n\n
\n

What D-Link Systems products are impacted?

\n \n

The FTC has made vague and unsubstantiated allegations relating to routers and IP cameras. Notably, the complaint does not allege any breach of any product sold by D-Link Systems in the US.

\n \n

Is there any security concern for current products?

\n \n

The FTC does not allege any breach of any product sold by D-Link Systems.

\n
\n\n

From that response, it's clear that D-Link don't want to admit that their devices are insecure. The FTC helpfully provides a list of the security vulnerabilities, but they don't actually provide a list of which products specifically are vulnerable.

\n\n

Which products are affected by the complaint, and what actions (if any) can I take to protect my devices and home network?

\n", "Title": "Which D-Link IP cameras are affected by the FTC complaint, and what can I do about them?", "Tags": "|security|dlink|", "Answer": "

I have a D-Link 5020-L IP Camera.

\n\n

Even without D-Link's software, the messages are transmitted via plain HTTP with a Basic authorization header, which means that if somebody sniffs my home network they can easily get my login/password. And yes, if my router has open ports, my login/password is base64 encoded in the Basic header, so it is very easy to read.

\n\n
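To see just how little protection that offers, here is a sketch of decoding a sniffed Basic header (the credentials are made up):

```python
import base64

# A hypothetical header captured off the wire; Basic auth is just
# base64 of "username:password", with no encryption at all.
header = "Authorization: Basic YWRtaW46aHVudGVyMg=="

encoded = header.split("Basic ", 1)[1]
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)  # -> admin hunter2
```

Base64 is an encoding, not encryption, so anyone who can read the traffic can read the credentials.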

As for the document I see two parts for what I have read at the beginning:

\n\n\n\n

With my first point, I would say \"all the devices\" are impacted since the user must enable security. So, to answer your question: how?

\n\n\n" }, { "Id": "686", "CreationDate": "2017-01-08T06:41:00.943", "Body": "

I am planning to measure the water level in a well, which is about 10 m deep with a maximum water level of up to 5 m. My plan is to use an HC-SR04 ultrasonic sensor to measure depth and transmit it via ZigBee to a Raspberry Pi inside my home.

\n\n

My question is: how best to connect the HC-SR04 to a ZigBee device? Since this sensor will be located inside a well, using minimal parts with the lowest power usage would be ideal.

\n", "Title": "Connecting a sensor to ZigBee", "Tags": "|raspberry-pi|sensors|zigbee|", "Answer": "

Generally you'd need some component to trigger and power the sensor and read the response. That sensor has a custom trigger and response, which makes me doubt there is a standard ZigBee module out there that converts a command to that 10 \u00b5s trigger pulse and reports back the response verbatim. Thus, you'll need some sort of microcontroller alongside your ZigBee module to perform that task.

\n\n
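Whatever microcontroller ends up doing that job, the conversion it performs is simple: the width of the sensor's echo pulse is the ultrasonic round-trip time. A sketch of the math (the function name is illustrative):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def echo_to_distance_m(echo_pulse_s: float) -> float:
    # The echo pulse covers the trip down to the water and back, so halve it.
    return echo_pulse_s * SPEED_OF_SOUND_M_S / 2

# A ~23.3 ms echo pulse corresponds to the sensor's ~4 m maximum range.
print(echo_to_distance_m(0.0233))
```

Note that the speed of sound varies with temperature, so a well-mounted sensor may want a temperature correction for precise readings.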

I'd probably put the microcontroller, the ZigBee module and a short-circuit protection circuit on a board outside the well (for humidity reasons) and run a four-wire cable inside to the sensor. Since the sensor has only four meters of range, it has to be mounted fairly close to the maximum high-water mark. Running only a small cable into the well keeps the other electronics out of the damp and puts the ZigBee module in a better position to relay the information to the Raspberry Pi.

\n\n

Of course, you can also put the MCU, the ZigBee module and the sensor in a waterproof casing inside the well, although that might give the ZigBee module reception problems; it depends a lot on your building.

\n" }, { "Id": "688", "CreationDate": "2017-01-08T08:17:53.077", "Body": "

I am running emqttd (emqtt.io). I would like to monitor clients connecting and disconnecting from a separate process that would be subscribing to a system topic where birth and will messages are posted. What is the right way to do that?

\n", "Title": "Subscribing to MQTT birth and will topics? (emqttd)", "Tags": "|mqtt|emq|", "Answer": "
\n

I would like to monitor clients connecting and disconnecting from a separate process that would be subscribing to a system topic where birth and will messages are posted.

\n
\n\n

emqtt's User Guide shows a system topic that offers some Broker Statistics, i.e. $SYS/brokers/${node}/stats/clients/count provides the count of currently connected clients. Note that this will not list any specifics about the connected clients - so I take it that is not what is needed here.

\n\n

Last will (LWT, Last Will and Testament) messages are, on the other hand, not a system topic but a regular topic set up by the client during connect. If you want to monitor that topic, simply subscribe to it. Note however that LWT messages are discarded if a client disconnects gracefully by sending a DISCONNECT message (see the hiveMQ blog, a great read btw).

\n\n

emqtt's User Guide presents a better way to monitor connecting and disconnecting clients:

\n\n
\n

The ./bin/emqttd_ctl command line could be used to query and administrate the EMQ broker (not working on Windows).

\n
\n\n

I think that clients list - list all MQTT clients - and clients show <ClientId> - show an MQTT client - are most helpful here. The planned separate process to monitor clients therefore does not need to subscribe to the broker, but can simply use ./bin/emqttd_ctl instead.

\n\n
\n
    $ ./bin/emqttd_ctl clients list\n    Client(mosqsub/43832-airlee.lo, clean_sess=true, username=test, peername=127.0.0.1:64896, connected_at=1452929113)\n    Client(mosqsub/44011-airlee.lo, clean_sess=true, username=test, peername=127.0.0.1:64961, connected_at=1452929275)\n
\n
\n" }, { "Id": "695", "CreationDate": "2017-01-08T13:43:13.900", "Body": "

According to The Register, lots of Amazon Echo devices were accidentally triggered by a presenter saying 'Alexa ordered me a dollhouse'.

\n
\n

Telly station CW-6 said the blunder happened during a Thursday morning news package about a Texan six-year-old who racked up big charges while talking to an Echo gadget in her home. According to her parents' Amazon account, their daughter said: "Can you play dollhouse with me and get me a dollhouse?" Next thing they knew, a $160 KidKraft Sparkle Mansion dollhouse and four pounds of sugar cookies arrived on their doorstep.

\n

During that story's segment, a CW-6 news presenter remarked: "I love the little girl, saying 'Alexa ordered me a dollhouse'."

\n

That, apparently, was enough to set off Alexa-powered Echo boxes around San Diego on their own shopping sprees. The California station admitted plenty of viewers complained that the TV broadcast caused their voice-controlled personal assistants to try to place orders for dollhouses on Amazon.

\n
\n

Voice purchasing seems to be enabled by default on the Echo, if you have 1-Click Purchasing set up.

\n

How can I stop Alexa from ordering things if an advertisement or TV show says the words "Alexa, order ____"?

\n

Will I need to disable voice purchasing altogether, or is there another way of making Alexa only respond to my orders?

\n", "Title": "How can I stop Alexa from ordering things if it hears a voice on TV?", "Tags": "|security|alexa|amazon-echo|", "Answer": "

There's a built-in feature in Alexa to prevent exactly this. Go to your app, then Settings, Alexa Account, and then Recognized Voices.

\n

Select "your Voice" and follow the prompts.

\n

Hey presto. If more people need to be able to order things, add their voices as well.

\n

Here's what a quick Google search turned up: How to Train Amazon's Alexa to Recognize Your Voice

\n" }, { "Id": "703", "CreationDate": "2017-01-08T16:54:53.897", "Body": "

Cortana is Microsoft's intelligent personal assistant for Windows Phone 8.1, Microsoft Band, and Windows 10.

\n\n

I am interested in how someone can turn on their Xbox One using a Cortana voice command. Unfortunately, when I searched the topic I only found articles about how to turn on Cortana on the Xbox itself. In my case, Cortana should listen on my PC running Windows 10.

\n\n

If possible I want to avoid serious scripting and the like at first, and I hope there is a more polished solution, if only because all of these are Microsoft products.

\n", "Title": "How to turn on Xbox One from Windows 10 PC using Cortana?", "Tags": "|microsoft-windows|wake-on-lan|microsoft-xbox|cortana|", "Answer": "

Without Cortana

\n\n
    \n
  1. By using the Xbox button on your controller if your controller is paired to your Xbox One.

  2. \n
  3. By using the official Windows 10 Microsoft Xbox app. To make this work, you have to connect your Xbox One to the Windows 10 app by following these steps. If streaming works, your Xbox One is connected to your Windows 10 Xbox app. Now shut down your Xbox One. You should still see your Xbox listed in the Windows 10 Xbox app. It should now also offer you an option to turn on your Xbox One (see screenshot below), and there you have it! :)

  4. \n
\n\n

Notice: I only tested this with my Xbox One connected by LAN, configured in high energy mode, and with the Windows 10 Xbox app version 38.38.14002.00000. Comments on whether this works over WiFi and with low energy settings are welcome.

\n\n

A screenshot (in dutch) of what this looks like:

\n\n

\"enter

\n\n

With Cortana

\n\n

I don't think Cortana natively supports waking your Xbox with your voice, so you need a third-party script or app. The script or app should use the Wake-on-LAN protocol and target your Xbox One's MAC address to wake it. This guy has a video about Cortana waking his PC; it should also work with your Xbox One.

\n\n

Another option you might want to look into is using IFTTT with the Cortana integration and some other wake-on-LAN integration. This is an example of waking your Xbox One using Google Assistant. You might be able to create your own working applet without any scripting at all.

\n" }, { "Id": "726", "CreationDate": "2017-01-09T17:47:55.470", "Body": "

I've been wondering how an MQTT client could instruct the broker to disconnect a client by some means, in case I need to force-disconnect a client from my MQTT server (for example, if it's misbehaving somehow and not responding correctly).

\n\n

A previous question highlighted the possibility of just connecting with the same client ID as the client you want to kill, but this seems unreliable at best and I'm wondering if there's a more reliable option that will meet my needs:

\n\n\n\n

Is there a feature that meets such requirements?

\n", "Title": "Can a Mosquitto MQTT client forcibly disconnect another?", "Tags": "|mqtt|mosquitto|", "Answer": "

Not directly.

\n\n

You could use an authentication plugin such as mosquitto-auth-plug to dynamically add users to a banned list and then force a disconnection by connecting with a duplicate client id.

\n" }, { "Id": "736", "CreationDate": "2017-01-10T17:48:30.683", "Body": "

The majority of smart assistant brands, such as the Amazon Echo and Google Home, offer very little in the way of customisation for the wake word (the phrase you use to wake up the device so it listens to you).

\n\n

For example, Alexa only offers three choices and Google Home only supports 'OK Google'. A lot of users seem to be interested in the idea of custom wake words, yet none of the major brands have added support.

\n\n

Is there any technical reason for restricting wake word customisation, or is it simply a branding choice?

\n\n

I've read about Google's motivation for using 'OK Google', which suggests the branding idea might be true, but it also seems that wake word recognition isn't very accurate, perhaps indicating a technical reason. Would anyone be able to clarify which factor is the main reason?

\n", "Title": "Why do most smart assistants offer little, if any, customisation of the wake word?", "Tags": "|smart-home|smart-assistants|", "Answer": "
\n

Is there any technical reason for restricting wake word customization

\n
\n\n

When the assistant device is not in use, the application processor (I think an ARM core in the case of Alexa as well as Google Home) is suspended and taken to the lowest possible power state. Wake word detection is left to a very power-efficient DSP, which listens to the ambient noise/voices and runs an algorithm to decide if there is a match to the wake word. If it finds a match with a good amount of confidence, the DSP wakes up the ARM core to get going with the rest of the processing.

\n\n

Now, since the goal is to be power efficient, the DSP in question runs the algorithm and stores the template pattern in on-chip memory rather than the main on-board RAM. This allows the system to take even the DDR RAM to its lowest power state.

\n\n

Since the DSP has a number of key things to do and very little on-chip memory, the assistant wake words are limited to a few chosen ones that the algorithm can match with a high degree of confidence.

\n" }, { "Id": "758", "CreationDate": "2017-01-12T12:41:17.787", "Body": "

As a result of this question I have read some articles about Alexa and its wake-words. One of the articles mentions the following:

\n\n
\n

Finally, for people with multiple Echo units, there\u2019s an argument to be made for multiple wake words. The microphone array on the Echo and Echo Dot units is very sensitive. If you have an Echo in your living room and a Dot upstairs in your bedroom, there\u2019s a good chance that issuing a command to Alexa while standing in the foyer will trigger both units. In such instances, it\u2019s really handy to have one wake word for the downstairs unit and one wake word for the upstairs unit.

\n
\n\n

Now it says that:

\n\n
\n

The microphone array on the Echo and Echo Dot units is very sensitive. If you have an Echo in your living room and a Dot upstairs in your bedroom, there\u2019s a good chance that issuing a command to Alexa while standing in the foyer will trigger both units.

\n
\n\n

So why would I need more of them when one can cover an average home? What are the possible use cases that justify having multiple Alexa devices at home?

\n\n

Also, to extend coverage, it could be enough to use a simple wireless microphone unit connected to Alexa.

\n", "Title": "Why would I need multiple Alexa devices in one home?", "Tags": "|alexa|amazon-echo|", "Answer": "

One reason would be a large house. We have 3 Alexa devices: in the master bedroom, living room and lounge.

\n\n" }, { "Id": "764", "CreationDate": "2017-01-12T20:32:47.543", "Body": "

This article claims that using a blockchain-based security system for an IoT network would prevent some types of attacks:

\n
\n

Blockchain technology may help offer an answer. Gada observes that blockchain offers inherent security not present in current, traditional networks. \u201cBlockchain technology is seen as a way to add security and privacy to sensors and devices,\u201d he states. \u201cIn traditional IT architectures, tampering can occur if a hacker is able to get through firewalls and other defenses built up by an organization. Once inside, tampering is often not recorded or noticed, and can occur unimpeded. This is simply not possible when using blockchain.\u201d

\n

Blockchain, Gada explains, is \u201ca suitable solution in at least three aspects of IoT, including big data management, security and transparency, as well as facilitation of micro-transactions based on the exchange of services between interconnected smart devices.\u201d

\n
\n

This seems like a bold, yet rather vague claim. How, exactly, would a blockchain system provide such protection to a network of connected devices? Are the benefits mainly due to the improved transparency of the public blockchain, or are there other benefits too?

\n", "Title": "Could a blockchain really prevent malware in the Internet of Things?", "Tags": "|security|networking|blockchains|", "Answer": "

Yes. IoT devices (e.g. a WiFi thermostat) with no open/listening ports, such as telnet or HTTP, usually dial into a central server and stay connected to that server 24/7. When you are abroad, the thermostat app on your smartphone contacts the same centralized server when you want to change the temperature, and that server relays the command back to the thermostat at your house. If hackers compromise that server, they have control of the thermostats (it was in Mr. Robot season 1). You can use certain blockchains, such as Ethereum, to interact with an IoT device instead of using a centralized server. An example is the SlockIt Ethereum door lock.

\n" }, { "Id": "784", "CreationDate": "2017-01-13T19:58:33.807", "Body": "

In MQTT, messages with QoS 1 or 2 must be delivered at least once (QoS 2 messages must be delivered exactly once). If the client is not connected, the broker must store the message until the client is ready to receive it.

\n\n

The HiveMQ blog has an interesting point:

\n\n
\n

But what happens if a client does not come online for a long time? The constraint for storing messages is often the memory limit of the operating system. There is no standard way on what to do in this scenario. It totally depends on the use case. In HiveMQ we will provide a possibility to manipulate queued message and purge them.

\n
\n\n

Since this seems to be dependent on the broker, how does Mosquitto handle this situation? Does it just crash after running out of memory or are old messages finally purged?

\n", "Title": "What happens if Mosquitto runs out of memory to store QoS 1/2 messages?", "Tags": "|mqtt|mosquitto|", "Answer": "

Messages are persisted to disk, not just held in memory.

\n\n

Look at the autosave_interval and autosave_on_changes options for controlling when the messages get written to disk.

\n\n
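A minimal persistence section for mosquitto.conf might look like this (option names are from the mosquitto.conf man page; the values and path are illustrative):

```conf
# Keep the in-memory database (including queued QoS 1/2 messages) on disk
persistence true
persistence_location /var/lib/mosquitto/

# Write the database out every 300 seconds
autosave_interval 300
```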

Source

\n" }, { "Id": "786", "CreationDate": "2017-01-13T21:01:56.710", "Body": "

The EMQ (Erlang MQTT Broker) is a \"distributed, massively scalable, highly extensible MQTT message broker\" with a reported \"1.3 million concurrent MQTT connections\" - so it potentially allows a large number of clients to publish and subscribe to it. It seems likely that some clients may be disconnected at any given time.

\n\n

As this question What happens if Mosquitto runs out of memory to store QoS 1/2 messages? asks about Mosquitto:

\n\n
\n

In MQTT, messages with QoS 1 or 2 must be delivered at least once (QoS 2 messages must be delivered exactly once). If the client is not connected, the broker must store the message until the client is ready to receive it.

\n
\n\n

So how does EMQ persist QoS 1/2 messages until delivery, i.e. a reboot of the broker or with respect to memory limits?

\n", "Title": "How does EMQ persist QoS 1/2 messages?", "Tags": "|mqtt|emq|", "Answer": "
\n

So how does EMQ persist QoS 1/2 messages until delivery, i.e. a reboot of the broker or with respect to memory limits?

\n
\n\n

The answer seems to be: it doesn't. This issue on their bug tracker says:

\n\n
\n

I am facing the issue to store persistent client sessions after broker restart. Does this feature currently not present in the broker or I missing some configuration?

\n \n

The broker will not persist sessions.

\n
\n\n

Also, after digging through a couple more issues, I found this report:

\n\n
\n

Initially I have set the max clients to 1000K in emqttd.config. Our machine has 8 GB memory with 4 core, I am able to connect 120K concurrent connection easily but when it exceeds 8 GB memory emqttd terminates itself. What I thought is set a max client per machine would be far better.

\n
\n\n

Essentially, as of v2.0.5:

\n\n\n\n

Not exactly ideal, but that seems to be the current behaviour, so if persisting messages is critical to your use-case, use a different broker.

\n" }, { "Id": "787", "CreationDate": "2017-01-14T10:06:26.940", "Body": "

I am trying to figure out if it is possible in any way for me to connect to a device on my home network while away from home.

\n\n

Scenario:

\n\n

My entertainment system is connected to the home network, and every now and then when I'm at work (50 km away) I would like to activate it (usually because family doesn't know how to operate it) using the Android App the device maker has which I have installed on my phone.

\n\n

Normally I would connect to Wi-Fi, and the app instantly connects to the entertainment unit and starts operating.

\n\n

But in this scenario, I would like to know how to achieve the same from 50 km away, when not on the same home WiFi connection.

\n\n

I looked into DDNS and VPNs, but neither quite added up for me.

\n\n

Keen to learn how to make the whole smart home work (all my electronic devices at home are connected to wireless network).

\n\n

Router: Netgear D6400\nDevices: AVR-X1100W (I've also got a couple of controllers I'm going to be installing for light fixtures and air con). They all have web interfaces too.

\n", "Title": "Connect to device at home network remotely", "Tags": "|smart-home|wifi|", "Answer": "

There are two ways that you can achieve this unless the devices you have in-home are configured to access an external server specifically to provide this function (most are).

\n\n

A VPN can be used to logically move your Android device to inside your home network. It is possible (but unlikely) that your router provides this functionality. In practice, you need a device within your home network to act as a host for the VPN. I use a NAS device (which comes with DDNS features as well), but you could implement this yourself using a single-board computer (such as a Raspberry Pi).

\n\n

Router port forwarding is technically possible, but less likely to work in your case. It would work if the device in your house has a web interface, but it doesn't work so easily if you have an app which you need to redirect from an external network. You can configure your router to pass an HTTP access on a special port (public_ip:12380) to port 80 on your entertainment device (192.168.1.xx:80). This would be OK (and easy) if you have a device running Kodi, for example, which has a web remote control.

\n\n

However, based on your asking the question, I'd say this is probably going to be very hard to set up; even using a NAS which supports a VPN isn't simple. It might be worth investigating whether a device like Google Home can implement the link you need.

\n" }, { "Id": "796", "CreationDate": "2017-01-14T16:56:11.190", "Body": "

An awful lot of devices on the market for home automation have severe security flaws, such as hard-coded passwords (or no passwords at all!). To make it worse, it's hard to find information on the Internet about a device's security before purchasing, so it's easy to buy something only to find out that it's blatantly insecure.

\n\n

This article suggests that the only option in this case is to return the device or throw it away:

\n\n
\n

So, if the device has no password, or you can\u2019t change the password it uses, it\u2019s always going to be vulnerable to attack. You\u2019ll need to throw it out and buy a new one that was built with at least the ability to change its password.

\n
\n\n

I suspect it might be possible to reduce the risk by restricting access to my devices to certain remote IPs, although I'm unsure as to whether that would completely solve any future problems.

\n\n

The product I'm interested in particularly is the Raysharp DVR, which allegedly uses hard-coded passwords and offers no way of changing the authentication credentials. What actions can I take to prevent unauthorised access while still being able to use the camera from outside my home network?

\n", "Title": "What can I do if a smart DVR camera only supports default passwords?", "Tags": "|smart-home|security|", "Answer": "

There are two obvious points of entry for an attacker. One is the local network connection, the other is the WAN. Even if you defend against these, you need to ensure that there is nothing valuable which is known to the device since it provides an easy attack point for other devices.

\n\n

A wired connection for the local network is most secure, but only if it is isolated from the rest of your network. If you have to provide a WiFi connection, use an SSID dedicated to this camera, with unique secrets.

\n\n

In order to open access to the isolated, insecure network segment, you need to set up a VPN. Ideally, the VPN will be the only way to connect to this isolated segment. In practice, it will still be vulnerable to local attack.

\n\n

Be sure not to connect the DVR to any other shared resources on your LAN such as networked storage.

\n" }, { "Id": "804", "CreationDate": "2017-01-15T14:02:48.947", "Body": "

This whitepaper comparing MQTT and AMQP says that MQTT's username/password restrictions make it far less secure than AMQP:

\n\n
\n

MQTT requires short user names and short passwords that do not provide enough entropy\n in the modern world.

\n
\n\n

Section 3.1.3.5 of the MQTT specification says that passwords can be up to 65535 bytes of binary data. A quick calculation shows that this produces a ludicrously large number:

\n\n

\"Wolfram

\n\n

To put that into scale, if you could try one hundred trillion passwords a second, it would take approximately 1 × 10^14 million years to exhaust the search space of any password between 1 and 65535 bytes.

\n\n
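For anyone who wants to reproduce the arithmetic, here is a quick sketch; the rate of 10^14 guesses per second is the hypothetical figure used above, not something an attacker could achieve against an online broker:

```python
# Back-of-the-envelope check of the MQTT password search space.
# The brute-force rate of 1e14 guesses/second matches the hypothetical
# figure used above; a real attacker hitting a live broker would be
# many orders of magnitude slower.

RATE = 1e14                       # guesses per second (assumed)
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(length_bytes: int) -> float:
    """Years needed to try every password of exactly `length_bytes` binary bytes."""
    keyspace = 256 ** length_bytes  # each byte can take 256 values
    return keyspace / RATE / SECONDS_PER_YEAR

# Even a 12-byte binary password is far out of reach, never mind 65535 bytes:
print(f"{years_to_exhaust(12):.2e} years")  # 2.51e+07 years
```

Even at the 12-character length the old spec suggested, binary passwords give an attacker tens of millions of years of work at this rate.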

Therefore, I can only assume that the author is either incorrect, or that they were talking about a restriction that occurred previously but has now been lifted.

\n\n

Why would the author of that whitepaper say MQTT's passwords have insufficient entropy, and is it still the case, or are my calculations correct?

\n", "Title": "Why does this whitepaper say MQTT usernames/passwords \"do not provide enough entropy\"?", "Tags": "|security|mqtt|", "Answer": "

Version 3.1 of the spec suggested some unreasonably low username/password lengths:

\n
\n

It is recommended that user names are kept to 12 characters or fewer, but it is not required.

\n

It is recommended that passwords are kept to 12 characters or fewer, but it is not required.

\n
\n

The 3.1.1 version of the spec dropped this recommendation, and I imagine that any sensible broker never enforced it anyway.

\n" }, { "Id": "812", "CreationDate": "2017-01-16T16:12:12.623", "Body": "

Please consider the following:

\n\n

I am looking to build a device that will be placed in a vehicle as it drives through heavy wooded areas, potentially far away from any mobile reception towers.

\n\n

The hope is for this device to be able to get its GPS location in near real time and upload that information either to another device or, in the best case, to the internet; the key point is for this to be as uninterrupted as possible.

\n\n

The device must not require any modification to the vehicle (attaching antenna to the roof etc) other than being powered/charged from a 12V cigarette lighter.

\n\n

Is my best option here to build an app and use a high-quality mobile phone, or to build a custom device with some kind of satellite internet and a high-quality GPS module?

\n\n

If building a custom device, I would consider using a Raspberry Pi as the controller; can anyone suggest equipment that could be used for this?

\n\n

Many thanks,

\n\n

Edit

\n\n

I would also love to see any examples of similar projects for some brainfood.

\n", "Title": "Robust GPS + Internet - Best option?", "Tags": "|raspberry-pi|gps|", "Answer": "

For near-real time tracking of vehicles while out of cellular service range, Spot makes the Spot Trace, a vehicle tracker that uses satellite communication. It transmits only when the vehicle has moved or is in motion, and only as often as you configure it for, minimizing data traffic over the satellite link. It operates on alkaline batteries (for concealed placement) or vehicle power.

\n\n

If you just need a product and don't feel the need to build one, this is a pretty inexpensive way to meet your stated needs.

\n\n

If you still want to customize it further and/or build something yourself, Globalstar offers a variety of satellite and satellite+GPS communications modules.

\n" }, { "Id": "813", "CreationDate": "2017-01-16T17:38:27.927", "Body": "

I recently read a Quora question about whether CoAP or MQTT is more lightweight, but the answers don't seem particularly satisfying and all contradict each other: the top answer says MQTT takes fewer resources, and another below that says CoAP is less demanding.

\n\n

From what I've found, it would make sense that CoAP would be less demanding than MQTT, since CoAP only requires UDP, and its messages are mainly fire-and-forget, unlike MQTT which functions over TCP (and hence would be much more involved).

\n\n

Which protocol requires the least resources to function?

\n\n
\n\n

By resources, I'm primarily thinking of required processor power, RAM and data to be transmitted. For example, in the Quora question I linked, the top answer points out that a simple ESP8266 chip, which has only an 80 MHz processor and less than 1 MB of RAM, could run MQTT. I'm curious as to whether CoAP could run on something like this, or in an even more constrained environment.

\n\n

The sort of use case I'm envisaging is one where the device would mostly be receiving data from another device (e.g. commands to switch on/off), but may need to infrequently (perhaps a few times an hour) send updates with the device's status. I'd like to use as little processing power as possible to reduce device costs, and to transmit relatively infrequently to reduce power usage as much as possible.

\n", "Title": "Does CoAP have a lower footprint than MQTT?", "Tags": "|mqtt|communication|protocols|coap|", "Answer": "

One of the answers above says that CoAP uses UDP and hence does not guarantee data delivery. This is not completely true: CoAP has a non-confirmed mode and a confirmed mode for sending messages. In confirmed mode, CoAP retransmits to try to ensure delivery, and you can tune the timeout and retry attempt count. The non-confirmed mode does not retry and is the 'fire-and-forget' mode you mention above.

\n
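To illustrate that tuning, here is a sketch of the confirmable-message back-off using the default transmission parameters from RFC 7252; these constants are exactly the knobs a CoAP stack lets you adjust:

```python
# CoAP confirmable (CON) message retransmission timing, using the default
# transmission parameters from RFC 7252, section 4.8.
ACK_TIMEOUT = 2.0          # seconds before the first retransmission
ACK_RANDOM_FACTOR = 1.5    # worst-case randomisation of the initial timeout
MAX_RETRANSMIT = 4         # retransmissions before giving up

def retransmission_waits(random_factor: float = ACK_RANDOM_FACTOR):
    """Wait (in seconds) before each retransmission of one CON message."""
    timeout = ACK_TIMEOUT * random_factor
    waits = []
    for _ in range(MAX_RETRANSMIT):
        waits.append(timeout)
        timeout *= 2           # exponential back-off
    return waits

print(retransmission_waits())       # [3.0, 6.0, 12.0, 24.0]
print(sum(retransmission_waits()))  # 45.0 == MAX_TRANSMIT_SPAN
```

Shrinking these values makes delivery fail faster on a lossy link; growing them trades latency for fewer retransmissions and hence less radio power.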

In terms of memory footprint, the code itself can take 5-200 kB depending on which library and which options you use. The runtime data usage depends on which features you use, such as confirmed mode, block transfer, security and named resources. For a tightly constrained trial, see https://www.mdpi.com/1424-8220/16/3/358, where the CoAP part of the code is about 5 kB!

\n" }, { "Id": "819", "CreationDate": "2017-01-17T16:23:38.377", "Body": "

I have a data logger board with a SIM808 on it. It has Bluetooth 3.0 capability by the SIM808. The board itself implements a battery management system, capable of performing weight, humidity and temperature measurements and also can detect device displacements. All collected data is transferred by GPRS connection to a remote server.

\n\n

The device itself can be installed into beehives, but it would not be cost effective to have a SIM card for hundreds of hives. So this device will only act as a master, which has data-logging capabilities in addition to GPRS.

\n\n

Thus, I am planning to implement slave boards without the SIM808 modules. So instead of the SIM808, a simple wireless communication unit is needed to enable local, wireless communication between the hives.

\n\n

The master would query all the slaves for their data, and then it would transfer everything via GPRS.

\n\n

It should look like this, only with a hundred hives:

\n\n

\"enter

\n\n

Now the possibilities for local wireless communication:

\n\n
    \n
  1. Bluetooth, as I said the master device already has Bluetooth 3.0. But I am not entirely sure that Bluetooth is the right way to query a hundred slaves for 1 kB of data each.
  2. \n
  3. The master device has an I2C bus, so I can connect I2C compatible ZigBee or other RF module which could be added to the slave boards as well.
  4. \n
\n\n

Collectable data from slaves won't exceed 1 kB/query.

\n\n

So all in all can I stay at Bluetooth or should I add ZigBee for example to my devices or are there any other options?

\n\n

Some more details:

\n\n\n\n

The main goal is to make the master able to query the slaves efficiently, and this should be done without modifying the PCB of the master. The two possibilities are Bluetooth 3.0, which is already available to the master, or other technologies that I can connect to the master board via the I2C bus of the on-board MCU. (I do not insist on using Bluetooth; it was the starting point because I already had BT 3.0 via the SIM808.)

\n\n

\"enter

\n", "Title": "Is Bluetooth 3.0 suitable for a single-master multiple-slave network?", "Tags": "|networking|communication|bluetooth|", "Answer": "

Alternatively, it might be worth considering WirelessHART (Highway Addressable Remote Transducer). This is a 2.4 GHz (licence-free frequency band) smart mesh networking technology based on the IEEE 802.15.4 standard. WirelessHART uses direct-sequence spread spectrum and needs at minimum three main components: wireless devices, a gateway and a network manager.

\n\n

\"Wireless

\n\n

Click on the image for a larger version.

\n\n

Additionally, depending on the network, a security manager, adapters and handheld terminals can be added.

\n\n

Dust Networks offers SoC options, some of which have an I2C interface. Attached below are links to some of the datasheets. Unfortunately, my knowledge of this technology is pretty limited, so it warrants further research.

\n\n

References

\n\n
    \n
  1. LTP5901-IPM/LTP5902-IPM
  2. \n
  3. WirelessHART - How it works
  4. \n
\n" }, { "Id": "822", "CreationDate": "2017-01-17T20:29:45.223", "Body": "

I was wondering if SmartThings supports backing up the settings (or perhaps exporting it in a machine-readable format such as JSON or XML), so that if worst comes to worst, I won't lose my settings and configuration.

\n\n

I read this thread which suggests that there is no official tool as of May 2016. Is this still the case? If so, is there any other way to backup or extract the settings from the hub?

\n", "Title": "Can I backup/export settings from SmartThings?", "Tags": "|samsung-smartthings|data-portability|", "Answer": "

I don't have SmartThings, but do have Google, so here is half an answer.

\n\n

Comments above from me and Prashanth to SmartThings forum(*) discussions (here, and here) indicate that backup / restore is not available to the casual user.

\n\n

However, the SmartThings developer documentation (which seems quite extensive), give information about overall preferences and settings here and here.

\n\n

And this discussion on the forum appears to show how to programmatically set preferences, which answers half of your question (and, of course, is of little use if you can\u2019t read the preferences to back them up).

\n\n

I would suggest reading the copious documentation and, if necessary, asking on the SmartThings forum, to discover how to backup preferences and settings.

\n\n
\n\n

(*) which might be the best place to ask, rather than here. Great as we are, here on SO, I would always recommend asking in a dedicated forum first, for any subject, before asking here.

\n" }, { "Id": "828", "CreationDate": "2017-01-18T15:35:33.407", "Body": "

I previously asked about what you can do if Alexa is triggered by a television programme, but recently I realised something strange: The Echo does not respond to the voices in adverts for the Echo, even if voices say \"Alexa, play ...\" or \"Alexa, set a timer for ...\".

\n\n

I searched on a few other Echo communities, and found a Reddit post that suggests that this is common/intended behaviour. There isn't a definitive answer in the thread, though, so I thought I would ask here to see if someone knows a little bit more.

\n\n

How does my Echo know not to answer to a TV advert? Is it just a co-incidence or is there something that tells Alexa not to react?

\n", "Title": "Why doesn't the Amazon Echo respond to advertisements or reports about Alexa?", "Tags": "|amazon-echo|", "Answer": "

When mixing the advert's audio, they simply remove some frequencies. This means that Alexa won't be triggered as it will not register it as a voice command, but viewers can still make out what they are saying in the advert.

\n\n

You'll also probably notice that when the command is spoken in the adverts, it sounds a little thin or garbled. This is why :)

\n" }, { "Id": "832", "CreationDate": "2017-01-19T06:19:17.623", "Body": "

I am trying to get messages from an MQTT broker and insert those messages into ZeroMQ. What do I need to do to connect an MQTT broker to ZeroMQ in Java?

\n", "Title": "How can we get messages from an MQTT broker and put them into the ZeroMQ queue?", "Tags": "|mqtt|zeromq|", "Answer": "

Basically, what you need to do is merge the MQTT subscriber code with a ZeroMQ sender in Java, such that when you receive a message from the MQTT broker, it gets forwarded to 0MQ for 0MQ subscribers to receive.

\n\n
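As a sketch of that bridge (written in Python with the paho-mqtt and pyzmq packages purely for brevity; the structure is identical in Java with Paho and JeroMQ, and the broker address and port numbers below are placeholders):

```python
def to_zmq_frame(topic: str, payload: bytes) -> bytes:
    """Encode an MQTT message as one ZeroMQ frame: topic, a space, then payload.

    ZeroMQ SUB sockets filter on a frame prefix, so putting the topic first
    lets 0MQ subscribers filter by MQTT topic.
    """
    return topic.encode("utf-8") + b" " + payload

def main():
    # Third-party packages, imported here so the helper above stays usable
    # without them installed: pip install paho-mqtt pyzmq
    import paho.mqtt.client as mqtt
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")           # port is an arbitrary choice

    def on_message(client, userdata, msg):
        # Forward every MQTT message straight into the 0MQ PUB socket.
        pub.send(to_zmq_frame(msg.topic, msg.payload))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)  # broker address is a placeholder
    client.subscribe("#")              # everything; narrow this in production
    client.loop_forever()

# main()  # uncomment to start bridging (requires a running MQTT broker)
```

The one design decision worth thinking about is the frame format; the topic-prefix convention above is just one option that plays nicely with 0MQ's prefix-based subscription filtering.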

I haven't used MQTT from Java, but a popular library seems to be Paho.

\n\n

The 0MQ documentation and example code is excellent, and a Java example can be found here.

\n" }, { "Id": "834", "CreationDate": "2017-01-19T09:16:19.267", "Body": "

This question is about understanding the internal workings of the CSR8675 Bluetooth audio chip

\n\n

These four terms keep popping up while working with the CSR8670/8675 chip

\n\n
    \n
  1. VM (Virtual machine)
  2. \n
  3. Firmware
  4. \n
  5. MCU (Microcontroller Unit)
  6. \n
  7. Kalimba DSP
  8. \n
\n\n

Could someone please explain in detail what exactly is the difference between them? I have some understanding of the differences between VM and firmware, and I believe that the kalimba DSP can be considered a completely separate processor just packaged inside the same 8675 chip, but where does the MCU fit into all of this? Is the bluetooth stack a part of the MCU as well?

\n\n

\"block

\n\n

8670 datasheet can be downloaded here

\n", "Title": "What is the difference between MCU, VM, firmware and Kalimba DSP in the CSR8675 chip?", "Tags": "|microcontrollers|audio-dsp|csr-adk|", "Answer": "

You are correct, the DSP is a separate processor within the CSR8675. It has its own program and data memory.

\n\n

All images are taken from the linked datasheet.

\n\n

\"block

\n\n

The DSP (Digital Signal Processor) is a dedicated processor. It has additional hardware units and parallel instruction execution support, which give it better performance for processing audio, video and similar signals where large amounts of data must be processed in a short time. Check the link for more details.\nThe audio-handling part of your software should be implemented on this processor.

\n\n

The MCU is a more general unit; the datasheet calls it an \"application processor\". It is used for the higher-level logic of your application. While the DSP handles the audio signals, general things such as LED driving, capacitive sensing and USB connection can be handled by this MCU.

\n\n
\n

The BlueCore\u00ae CSR8670\u2122 BGA consumer audio\n platform for wired and wireless applications integrates\n an ultra-low-power DSP and application processor\n with embedded flash memory

\n
\n\n
\n\n

As for the firmware and VM. Page 104 gives you a comprehensive figure of the software.

\n\n

\"model

\n\n

The firmware is the complete software of the device, and it consists of different parts.

\n\n\n" }, { "Id": "838", "CreationDate": "2017-01-19T18:30:40.620", "Body": "

There are several brands of door locks supported by SmartThings, all of which could be used to secure someone's front door. After some research, I found it was possible to create a virtual switch to control the lock, so that Google Home can control it.

\n\n

However, in that thread, there's an interesting point about controlling locks through a voice assistant:

\n\n
\n

One concern people have raised. If you enable voice control of locks, someone could stand outside your house and tell your voice system to unlock the lock.

\n
\n\n

Surprisingly (or perhaps not!), this has already happened with an August Smart Lock controlled by Siri:

\n\n
\n

Last Friday morning, as Marcus was pulling out of the driveway, Mike [Marcus' neighbour] walked up with a grin asking if he could borrow some flour. Marcus responded sure and started to get out of the car to let him in. But before Marcus could do anything, his neighbour said, \"I'll let myself in,\" and ran over to the front door. He then shouted, \"Hey Siri, unlock the front door.\" The door unlocked.

\n
\n\n

For the setup of a Google Home controlling a SmartThings lock that I proposed at the start of the question, how can I prevent intruders or unwanted visitors from opening my front door? Is the only viable option simply to disable it altogether, or might it be possible to modify the setup so that it is more secure?

\n", "Title": "How can I securely support unlocking my door through Google Home?", "Tags": "|samsung-smartthings|google-home|", "Answer": "

I use Alexa, and it does recognize voices. Also, my Alexas are not placed that close to the door, and I have double glazing. So you would have to shout really, really loud; you would probably need a loudspeaker.

\n

On the other hand, in modern houses, ingress is hardly an issue, just break any of the glass panes on windows or doors.

\n

I have what is called an electric door strike for my front and back doors. They are controlled by a secure keypad, which we use to come in keylessly, but they are controllable by Alexa as well.

\n" }, { "Id": "840", "CreationDate": "2017-01-19T21:18:26.360", "Body": "

I searched for \"MQTT Rules Engine\", and came across the HomeWSN project. It looks like it's almost trying to be a home automation system based on MQTT messaging, which could be very useful. Has anyone deployed a real system based on this tool?

\n", "Title": "Any real-world HomeWSN deployments?", "Tags": "|smart-home|mqtt|", "Answer": "

It's been nearly 7 years since I asked this question, and a web search in 2024 still shows no relevant results. I'm going to answer this as "no, there have been no real-world deployments of HomeWSN."

\n" }, { "Id": "845", "CreationDate": "2017-01-20T16:48:58.753", "Body": "

Version 4812 of the Echo software apparently includes the new 'Computer' wake word, which seems really exciting. However, my device is still running version 4540 and hence doesn't have the wake word yet.

\n

The Amazon documentation says:

\n
\n

Your Alexa device receives software updates automatically over Wi-Fi.

\n

To download the latest software update for your Alexa device:

\n\n
\n

However, it doesn't explain how to actually initiate the updating process. Can I force my Echo to update in any way, or do I just have to wait for it to perform the self-check?

\n", "Title": "How do I force my Amazon Echo to update to the latest version?", "Tags": "|amazon-echo|", "Answer": "

I found a slight variation of the above steps to work:

\n\n
    \n
  1. Unplug for 10 seconds
  2. \n
  3. Plug back in and immediately tap mute once (the microphone will turn red)
  4. \n
\n\n

Your unit will then immediately update. Takes about half an hour to complete.

\n" }, { "Id": "847", "CreationDate": "2017-01-20T18:08:38.477", "Body": "

Considering a typical type of application, a battery powered sensor taking readings (32 bit value) every 10 minutes, what is the likely impact on battery life if I choose a simple un-encrypted on-air protocol, compared with an encrypted transmission?

\n\n

Assume that my data isn't particularly secret, but according to this question I probably need to consider encrypting it, so long as there isn't actually a significant design cost.

\n\n

For simplicity, let's assume I'm using a nRF51822 SoC which supports a BLE stack and a simpler 2.4 GHz protocol as well.

\n\n

Since I'm thinking of a commercial product application rather than a one-off installation, the encryption needs to be compute-intensive to break (say, at least $500 of 2016 cloud compute), rather than a simple obfuscation: something that remains secure even with access to the device firmware.

\n", "Title": "What is the power implication of encrypting my sensor traffic?", "Tags": "|sensors|microcontrollers|power-consumption|product-design|", "Answer": "

The bulk of your power will likely be expended on RF transmission, not CPU cycles spent in encryption routines. Every additional bit transmitted will cost you more power than the encryption you're proposing. That means if you take a naive approach, like using AES in CBC mode, you risk increasing the message size to carry the extra bits in each block.

\n\n

If you determine your business needs the data to be encrypted, consider using AES in CTR mode to generate stream cypher bits. Counter mode is practical for dealing with cases where reception can be unreliable and packets may be lost. You'll have to keep the counters synchronized, so be aware that periodically transmitting the counter's value will add to the overhead. And you'll have to reserve a few bytes of state to hold the counter, because reuse of an encrypted bit stream can lead directly to data recovery.

\n" }, { "Id": "853", "CreationDate": "2017-01-21T11:15:22.370", "Body": "

I want to have a custom wake word for Alexa, so I want to interface the Amazon Echo with an external device. I am wondering if it is possible to create a proxy device that would wake up Alexa when I give a voice command to the proxy device. More precisely, it should be able to switch Alexa between the following states.

\n\n
\n \n
\n\n

The idea is simple. The device would be capable of recognising words, just some words nothing too difficult.

\n\n

By default, it would keep Alexa in Microphone Off state, so it won't pick up voices from its environment.

\n\n

Now, when I want to use Alexa, instead of waking it up directly I would use my proxy, that would somehow enable Alexa's microphone and switch Alexa into Listening state.

\n\n

When Alexa goes back to Idle the proxy should automatically switch it to Microphone Off state.

\n\n
\n\n

What I need in general are:

\n\n
    \n
  1. The proxy should know Alexa's current state. It might not be the best solution, but I may be able to infer Alexa's current state from its attention system (its sound and LED signals are summarised here). Is there any other way I can know Alexa's current state?

  2. \n
  3. The proxy should be able to switch Alexa into a specific state. So how can I make Alexa to switch between its states using another device?

  4. \n
\n\n

It all comes down to what are the possibilities to interface an Amazon Echo / Dot (and Alexa) with another device?

\n\n

(I am interested in solutions using mechanical interaction as well.)

\n", "Title": "How can I detect Alexa's current state or change its current state with an external device?", "Tags": "|alexa|amazon-echo|hardware|", "Answer": "

This open-source Raspberry Pi Alexa client supports free wake words, changed as easily from the Pi's terminal as:

\n\n
sudo systemctl stop AlexaPi.service\nsudo nano /etc/opt/AlexaPi/config.yaml\n\nchange line:\nphrase: \"alexa\"\n
\n\n

See discussion.

\n" }, { "Id": "854", "CreationDate": "2017-01-21T16:41:13.983", "Body": "

According to Cognosec, there is a critical vulnerability in ZigBee (one of the protocols supported by the SmartThings hub) which allows attackers to gain access to a ZigBee network by abusing a feature called \"Insecure Rejoin\". This forum post has an accessible yet detailed explanation of the issue for further context.

\n\n

I found a section in the SmartThings FAQ about the issue, which seems concerning:

\n\n
\n

The current ZigBee Home Automation 1.2 standard uses encryption to allow only authorised devices to join a home network. In order to allow some devices (like motion sensors) to drop off of, and then easily re-join the network (to preserve battery power), there is a feature known as \u201cinsecure rejoin\u201d built into the standard. It has been shown, however, that in very specific cases this feature could potentially be used to gain unauthorised access to a ZigBee network.

\n
\n\n

According to that FAQ, Insecure Rejoin is enabled by default. Is this true, and does it mean that virtually all SmartThings hubs are vulnerable to attack?

\n", "Title": "Is \"Insecure Rejoin\" still enabled by default for Samsung SmartThings hubs?", "Tags": "|security|samsung-smartthings|", "Answer": "

I just recently discovered this issue while upgrading my SmartThings Hub to a newer model.

\n\n

It appears that hubs newly added to your account do not have Insecure Rejoin enabled, but my older model did have this feature enabled. So perhaps Samsung has updated their systems to protect users.

\n\n

Below are screenshots showing how to review/change your configuration for this security feature. Starting from your location 'context menu':

\n\n

\"Click\"Scroll\"Click\"Change

\n" }, { "Id": "857", "CreationDate": "2017-01-22T11:47:41.517", "Body": "

According to this comment on Reddit, Philips Hue bulbs reset to 100% brightness after any power interruption (e.g. a power cut, switching the physical light switch off then on, etc).

\n\n

This does seem like a useful safety feature, but it's not practical for lights in my bedroom; if there's a power outage and then the power comes back on, the lights switch back on at 100% brightness, waking me up again. For areas where the power supply occasionally has problems, this could be a really big problem - imagine being woken up multiple times per night if the power cuts out, even for a couple of seconds!

\n\n

Is there any way I can prevent Philips Hue bulbs from returning to 100% brightness after a power reset? Official solutions or workarounds would both be helpful.

\n", "Title": "How can I stop my Philips Hue bulbs resetting to full brightness after a power cut?", "Tags": "|philips-hue|ac-power|", "Answer": "

Hue Restore App might help you.

\n\n

Hue Restore lets you save and restore all Philips Hue lights in your home with a single tap. You can do this right from the home screen of your phone with a convenient home-screen widget.\nYou no longer have to worry about those pesky power-cycle resets, because you can get back your favourite settings in a single tap.

\n\n
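If you would rather script a workaround yourself, the bridge's REST API can re-apply a saved brightness once power returns. A rough sketch (the bridge IP and API username below are placeholders; you create the username through the bridge's link-button pairing flow):

```python
# Workaround sketch: re-apply a saved brightness to a bulb through the
# Hue bridge's REST API after a power cut. BRIDGE and USERNAME are
# placeholders for your own bridge IP and API username.
import json
import urllib.request

BRIDGE = "192.168.1.2"            # your bridge's IP address (placeholder)
USERNAME = "your-api-username"    # created via the link-button pairing flow

def state_payload(bri: int, on: bool = True) -> bytes:
    """JSON body for PUT /api/<username>/lights/<id>/state (bri is 1-254)."""
    return json.dumps({"on": on, "bri": max(1, min(254, bri))}).encode()

def restore_light(light_id: int, bri: int) -> None:
    """Re-apply a saved brightness to one bulb through the bridge."""
    url = f"http://{BRIDGE}/api/{USERNAME}/lights/{light_id}/state"
    req = urllib.request.Request(url, data=state_payload(bri), method="PUT")
    urllib.request.urlopen(req)

# restore_light(1, 25)   # e.g. bring a bedroom bulb back to ~10% brightness
```

You would still need something to trigger this (for example, a small always-on box that notices the bridge coming back online), so an app that packages this up does save real effort.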

Disclaimer: I am the developer of this app.

\n" }, { "Id": "861", "CreationDate": "2017-01-23T16:20:37.817", "Body": "

Is it possible to build a custom skill that would be equivalent to pressing the microphone on/off button on the top of the Amazon Echo?

\n\n

I know from this article on How-To Geek that such voice command is not available by default:

\n\n
\n

One feature we found missing, and surprisingly so given that the whole appeal of the Echo is voice control, is the inability to turn off the microphone via voice command. If you issue a command to Alexa like \u201cAlexa, turn off the microphone\u201d she\u2019ll cheerily announce that there are no connected home devices that fit that description and give you instructions on how to set up the connected home features of the Alexa/Echo system.

\n
\n\n

Does this mean this feature is also unavailable via API calls as well?

\n\n

And if it's not possible, why does Amazon not support this feature?

\n", "Title": "Can I ask Alexa to turn off its microphone by voice command?", "Tags": "|amazon-echo|alexa|", "Answer": "

The direct answer is no.

\n

Workaround 1: a hardware interrupt that can be activated via an app across a battery of devices; difficulty level: hard.

\n

Workaround 2: since you can change the wake word of any Echo device via the app, and Android phones have a variety of batch (tap macro) apps available, hypothetically speaking, you could create a macro that goes through each device in the Alexa app's list and changes its wake word to something other than your regular one. You can then create a second macro to undo this. Difficulty level: medium.

\n" }, { "Id": "865", "CreationDate": "2017-01-24T14:36:42.547", "Body": "

I am hoping to be able to use a smart plug with my window air conditioner. I currently use the TP Link HS 100 plugs around my home, but I am unsure if they could handle the amount of power an air conditioner uses. Any suggestions?

\n", "Title": "Can I use a smart plug with an air conditioning unit?", "Tags": "|smart-home|smart-plugs|tp-link|", "Answer": "

In 2024, many heat pumps are C-Bus compatible. In my main home, I am using a C-Bus adapter that exposes REST calls. I found a handler and paid for it, so I didn't have to write the interface code for it.

\n

For my parents-in-law's house, I installed an Aeotec Heavy Duty Switch, which uses Z-Wave. A couple of specs:

\n\n

I have also used the same device at home to control my hot water heaters and the spa and pool pumps. For these, I just turn off power to the whole gadget and stage them in, one at a time, to take advantage of solar power.

\n

Note - I am not associated with Aeotec in any way.

\n" }, { "Id": "866", "CreationDate": "2017-01-24T16:25:26.613", "Body": "

According to Amazon, Alexa can read certain Kindle books.

\n\n
\n

Alexa reads Kindle books eligible for Text-to-Speech (an experimental reading technology that allows supported Amazon devices to read Kindle books aloud).

\n
\n\n

Concept

\n\n

If it is possible, I want to use this feature, but instead of reading Kindle books, Alexa should read custom texts or reports made by some smart-home devices. So during the day, different devices would report different events like:

\n\n\n\n

Basically a service would collect all the data from the sensors and would create a report file that could be used with Alexa like:

\n\n\n\n

(So I could ask Alexa at the end of the day \"What happened today?\" and it could tell me by reading the reports.)

\n\n

Problems

\n\n

The reports should be in a correct format to make them eligible for Alexa to read them. I found something about it on Amazon forum, Can I enable Text to Speech on any personal document?

\n\n
\n

Only if that document will open in the Reading App. Word documents, for instance, that have to be opened in a Word Process app or PDFs that have to \n opened in a PDF Reader can't use the Text to Speech feature.

\n \n

All documents in a Kindle compatible format should have Text to Speech\n available but I send all mine via the Amazon Cloud and Amazon converts\n them to the Kindle format. A tap in the center of the screen reveals\n the \"Play\" icon in the bottom left hand corner.

\n
\n\n

Also, I found an app on Amazon called \"Pdf to Speech\", and Amazon's Kindle Direct Publishing tool \"KindleGen v2.9\", but it is still unclear how this should be done.

\n\n

Possible solution

\n\n

One way I have found on Reddit describes the following:

\n\n
\n

You don't need a kindle device, but you will need to download the kindle app. This gives you a special kindle address, to which you'd mail the PDF, which puts it in your kindle library.

\n \n

I have several devices, each loaded with the kindle app. So I have several kindle addresses, one per device (me-ipad@kindle, me-nexus@kindle, etc).

\n \n

The good news: once you email it, the PDF lives in the Kindle Cloud, so it's accessible to all...... Amazon related Kindle page

\n \n

To have Alexa read it: 1) open Alexa app, go to books, tap desired book; or 2) Alexa, read (title) ..... Amz related Alexa page

\n
\n\n
\n\n

All in all, is this Kindle-compatible format actually .mobi? What steps should I perform to make the reports available to Alexa? (I would like to avoid the e-mailing stuff if possible.)

\n", "Title": "How to read custom documents by Alexa?", "Tags": "|smart-home|amazon-echo|alexa|", "Answer": "

You can use skills such as the My Reader skill, which can read any text you send to it through its servers.

\n\n

Once you have set it up, the steps are as follows.

\n\n
\n

How to use - Quick Start

\n \n
    \n
  1. Send the URL to 619-473-2337 (6194READER) from your phone by following steps for different browsers on your phone:\n https://s3.amazonaws.com/reader.help/How_to_Register_Phone_Number.pdf

  2. \n
  3. In a few seconds, you will receive message with an article index number, total chapter count and the article title.

  4. \n
  5. Launch the skill: “Alexa, ask My Reader to read.”

  6. \n
\n
\n\n

There are a number of other skills which do a similar function, such as Text to Voice, depending on what exactly you'd like to do.

\n" }, { "Id": "868", "CreationDate": "2017-01-24T18:20:42.910", "Body": "

Samsung wearables like the Gear watches, which can run the Android app S Health, are advertised as quite secure and can be run using the Samsung Knox security feature, which is itself advertised as certified by government agencies and so forth.

\n\n

However, there seems to be no easily available information on whether my health data is only stored securely on the device or whether it's automatically stored in cloud storage as well.

\n\n

This section from the App description seems to point to a rather free usage of at least the step count.

\n\n
\n

Compete with your friends and check your ranking. You can compete with your friends in the address box once your Samsung account is registered. On \"together\" section, you can select your own competitor and compare your steps with people of different age group across the globe.

\n
\n\n

Which data of S Health devices is stored where and how can I control it?

\n", "Title": "Are Samsung's \"S Health\" devices storing health data in the cloud?", "Tags": "|privacy|wearables|", "Answer": "

Rather a lot of personal information is collected, according to the Privacy Policy. Here are some of the more sensitive pieces of information collected:

\n\n

When not logged in

\n\n\n\n

When using 'Enhanced Features'

\n\n\n\n

When logged in

\n\n\n\n

As you can see, with all those data points, someone could get a clear picture about your health, activity and location. Samsung even admit in the Privacy Policy:

\n\n
\n

Please note that such wellness-related information can reveal your state of health and can therefore consist of sensitive personal data.

\n
\n\n

However, I'm not surprised or even upset that Samsung collects that information; without collecting it, the fitness tracking wouldn't work very well at all.

\n\n

Samsung are a little vague about where the information is stored:

\n\n
\n

We use a variety of standard security measures, including encryption and authentication tools. When you access information, we offer the use of a secure server.

\n
\n\n

Your guess is as good as mine!

\n\n

If you're not comfortable with this, European data protection regulations help a lot:

\n\n
\n

You may also have statutory rights to access and edit such information and you have the right to request information about your personal data. Furthermore, you may refuse the disclosure of your information to a third party at the moment the information is collected. If you have any questions about the information we hold, please contact our customer service department at http://help.content.samsung.com or the European Data Protection Officer, Samsung Electronics (UK) Limited, Samsung House, 1000 Hillswood Drive, Chertsey, Surrey KT16 0PS.

\n
\n\n

Not convenient, but in theory, you can ask Samsung to stop collecting data at any point, or request all of the information they have on you.

\n" }, { "Id": "869", "CreationDate": "2017-01-24T18:21:10.427", "Body": "

As far as I can tell, most data from the Amazon Echo (e.g. recordings of my commands) are stored in the cloud, according to the Alexa FAQ. However, I couldn't find any authoritative information about what information is stored on the device itself.

\n\n

A previous question I asked suggests that short snippets of sound are stored on the Echo itself so that the wake word can be detected, but apart from that, I'm not sure.

\n\n

If I ever wanted to sell the Echo, it'd be useful to know what information is on the device, so that I can try to remove it.

\n\n

What personal information is stored on the device itself (not in the cloud)? Amazon login credentials? Cached data from skills?

\n", "Title": "What personal information is stored on my Amazon Echo?", "Tags": "|amazon-echo|privacy|", "Answer": "
\n

If I ever wanted to sell the Echo, it'd be useful to know what information is on the device, so that I can try to remove it.

\n
\n\n

I think one good option is to deregister the Echo device from your Amazon account, as this video guide shows.

\n\n

Here are the steps:

\n\n
    \n
  1. Go to the Alexa app.
  2. \n
  3. Click 'Settings'.
  4. \n
  5. Select the device that you want to deregister.
  6. \n
  7. Scroll to 'Device is registered to ...', and click 'Deregister'.
  8. \n
  9. Confirm that you want to deregister when the modal pops up.
  10. \n
\n" }, { "Id": "873", "CreationDate": "2017-01-25T14:55:01.570", "Body": "

The camera in question is a Ring Stick Up Cam. I want to add an additional layer of security and prevent anyone from peeking (listening) into my home.

\n\n

I thought that it might be good if I could set up some kind of IP whitelist, so only requests from authorized sources would get through towards the camera.

\n\n

I have a ZTE Speedport Entry 2i router + modem from my Internet provider (Hungarian manual, just for the picture; the English manual seems to describe the same device).

\n\n

Is this a reasonable idea, and if so, how could someone set up such a service for an IP camera?

\n", "Title": "How can I set up IP whitelisting on an IP camera without support for whitelists?", "Tags": "|smart-home|security|privacy|digital-cameras|whitelisting|", "Answer": "

I have to start by saying that this will have to take place on the router. I looked into the camera, but it seems too locked down by the manufacturer to run a modification that complex on. Perhaps you could manage it with a firmware replacement, but not simply.

\n\n

With your particular router, it appears that you can. I don't actually have your router, so I could be reading the documentation wrong, but if I'm reading it right, follow these steps (adapted from the Cosmote documentation):

\n\n
    \n
  1. Go under Internet > Security > Filter Criteria.
  2. \n
  3. Select the radio button beside URL filter.
  4. \n
  5. Select New Item.
  6. \n
  7. Type in any name and the IP of your camera and Apply.
  8. \n
  9. Click IP filter - IPV4 to open the IPV4 filters page.
  10. \n
  11. Edit the settings under Destination IP and Source IP range to match your requirements.
  12. \n
\n\n

I could be wrong, but that appears to be the method. :) You might have to apply multiple rules to block all but the IPs you want; I'm not sure.
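For reference, the logic such a filter implements boils down to "is the source address inside an allowed network?", which can be expressed with Python's ipaddress module (the addresses below are placeholders, not values from your router):

```python
import ipaddress

# Hypothetical whitelist: only these sources may reach the camera.
WHITELIST = [
    ipaddress.ip_network("192.168.1.10/32"),   # e.g. your laptop
    ipaddress.ip_network("203.0.113.0/24"),    # e.g. your office's range
]

def is_allowed(source_ip: str) -> bool:
    """Return True if source_ip falls inside any whitelisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in WHITELIST)

print(is_allowed("192.168.1.10"))   # True
print(is_allowed("198.51.100.7"))   # False
```

The router applies exactly this membership test to every incoming packet; the only difference is that you express the rules through its web UI instead of code.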

\n\n

If you are unable to block all traffic except the IP you want on your router, the answer is, no, it is not possible, short of buying a new router.

\n\n

Hope it works!

\n" }, { "Id": "875", "CreationDate": "2017-01-25T18:31:33.000", "Body": "

I was reading about the privacy of Fitbit devices, and this Huffington Post article has a rather concerning point with regard to GPS-enabled fitness trackers:

\n\n
\n

In certain cases, the government or legal institution could request your fitness tracker information and then use it against you in a court of law. That’s what happened to Chris Bucchere, a San Francisco cyclist who struck and killed an elderly pedestrian. Bucchere was charged with felony vehicular manslaughter, carrying a potential penalty of six years in prison. Prosecutors obtained his data from his GPS-enabled fitness tracker to show he’d been speeding before the accident. Bucchere’s self-monitoring became a piece of evidence against himself due to a lack of privacy. This is not to condone Bucchere — clearly he committed a crime — rather this just illustrates one example of surprising use cases for what you might think is harmless personal data.

\n
\n\n

Clearly, I would rather not have my GPS location recorded at all times, in case the data gets hacked or given to anyone without my permission. The Fitbit Surge is one of these GPS-enabled fitness watches - how can I disable GPS tracking and clear any information they have on me? Is the GPS data also synchronised to the cloud?

\n", "Title": "How do I stop the Fitbit Surge from storing GPS data?", "Tags": "|privacy|gps|fitbit|", "Answer": "

According to the Fitbit Community, the GPS is only enabled when you have turned on a tracking activity, for example walking, hiking, biking, etc. From the Fitbit Community:

\n\n
\n

GPS is only when you are tracking an activity that uses GPS, such as hiking. As soon as you stop tracking that activity GPS is turned off.

\n
\n\n

and

\n\n
\n

By design, it should only turn on when you have started an (outdoor) exercise using the controls on the Surge - that means, Run, Free Run, Walk, Hike, Bike, or Golf. If you haven't started one of those exercises, there wouldn't even be a way of telling if the GPS is on or not.

\n
\n\n

In other words, to keep the GPS off, just don't start any of those controls. If you want to have a hike registered, just put it in \"work out\" mode and rename it as a hike later. (From the Fitbit Community)

\n" }, { "Id": "880", "CreationDate": "2017-01-26T13:46:04.517", "Body": "

I am planning to measure water level in a well, which is about 10 m deep with maximum water level up to 5 m. My plan is to use ultrasonic sensor HC SR04 to measure depth, transmit it via ZigBee to a Raspberry Pi inside my home.

\n\n

As discussed in my previous question I need to select a micro controller to connect the ultrasound sensor and the ZigBee module together.

\n\n

The parameters for selection are:

\n\n
    \n
  1. Low power: I am planning to run this on battery, so low power usage is a priority. As of now I do not have any target for power usage or days between battery changes or even which battery to use. Since this is more of a learning project and it's in my home, I am flexible, but lower power usage is better.

  2. \n
  3. Low cost: This is a learning project for me, and I do not want to spend an outrageous amount of money on this, so lower cost is better.

  4. \n
  5. Working inside a well: The whole project will be working from inside a well and will be exposed to harsh sunlight and rain. I will be providing a good case and protection though.

  6. \n
  7. Easy to program.

  8. \n
\n\n

I chose ZigBee as it is simple, low power, and meets my use case. But my requirement is just to transport the sensor data, and I am open to other transports. The distance from my well to the Raspberry Pi is about 6 meters with a wall in between. I am planning to measure the water depth every 10 minutes, and twice a minute when the water pump is running (approx 20 minutes daily).

\n", "Title": "Selecting a microcontroller for a battery operated data collection project", "Tags": "|microcontrollers|hardware|zigbee|", "Answer": "

Looking at ease of programming and low cost, I would probably start with some kind of Arduino module (or low-cost clone). Code for your ultrasonic sensor already exists, as does example code for ZigBee, for example using the Digi XBee modules. On the latter, you connect the XBee to a serial port, and after making the connection with the venerable old \"AT\" command interface, you then have a point-to-point channel that you can send any text down (to your Raspberry Pi). ZigBee is not the cheapest type of short-range communication, but the XBee modules have fallen in price in real terms over the last 5 years.

\n\n

I know that some people have a problem with the C/C++ based language used on Arduino, but in this case you'd largely be merging together already existing scripts from other users.

\n\n

If you Google around for \"Arduino sleep mode\", you'll find examples of how you can put the Arduino into low power mode, and wake up sporadically to take a reading, communicate it, then re-enter sleep mode.
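On the Raspberry Pi side, the processing is straightforward once the readings arrive over the XBee's serial link. Here's a minimal Python sketch of that end; the text line format ("echo_us:&lt;microseconds&gt;") and the constants are assumptions for illustration, not part of any existing protocol:

```python
# Sketch of the Raspberry Pi side, assuming the Arduino sends one text
# line per reading over the XBee serial link, e.g. "echo_us:29154".
# The line format and well dimensions are assumptions for illustration.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 degrees C
WELL_DEPTH_M = 10.0                # sensor-to-bottom distance, per the question

def parse_reading(line: str) -> float:
    """Extract the HC-SR04 echo duration (microseconds) from a reading line."""
    key, value = line.strip().split(":")
    if key != "echo_us":
        raise ValueError(f"unexpected reading: {line!r}")
    return float(value)

def water_level_m(echo_us: float) -> float:
    """Convert the round-trip echo time to a water level in metres."""
    distance_to_surface_m = (echo_us * SPEED_OF_SOUND_CM_PER_US / 2) / 100
    return WELL_DEPTH_M - distance_to_surface_m

echo = parse_reading("echo_us:29154")   # about 5 m down to the water surface
print(round(water_level_m(echo), 2))    # roughly 5.0, i.e. a half-full well
```

In a real deployment you'd read the lines from the serial port (e.g. with pyserial) instead of a literal string, but the conversion logic stays the same.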

\n" }, { "Id": "882", "CreationDate": "2017-01-26T17:27:47.000", "Body": "

I was recently asked if Alexa can ever speak without prompting, so I thought it'd be helpful to ask here to make sure I'm right; as far as I know, Alexa will never, ever speak without the wake word, and the only unprompted sound it will make is the alarm sound.

\n\n

This TechCrunch article seems to agree that there isn't any way to make Alexa speak unprompted, but it doesn't mention Alexa skills at all; is there perhaps some API available to them which isn't yet used?

\n\n

Many people seem to be interested in this so that they can get Alexa to say certain phrases, such as perhaps an alert if the doorbell is ringing, or some way of indicating an event has happened.

\n\n

Can Alexa speak without first being prompted by either the wake word, tap-to-talk or push-to-talk (depending on the device)? I'm excluding alarms for the purpose of this question, but solutions using custom skills are fine.

\n", "Title": "Can Alexa ever speak without being prompted?", "Tags": "|alexa|", "Answer": "

As time has passed, I think the answer to this question now needs to be:

\n\n

Yes, Alexa can speak without being prompted. Specifically, she can utter anything you want her to!

\n\n

The convenient tool you can use is a shell script named alexa-remote-control. Detailed documentation of the script is available in this blog post, although only in German.

\n\n

It relies on HTTP POST requests to achieve things like playing music or radio, activating the daily briefing, and letting your Echo devices speak any text you want them to.

\n\n

The text-to-speech feature can be used in Linux, e.g., by executing this command in a terminal:

\n\n
alexa_remote_control.sh -d \"Your Echo's name\" -e speak:'Welcome back buddy!'\n
\n\n

I use it frequently within Node-Red running on a Raspberry Pi, e.g. to issue warnings once some sensor reading moves outside its normal range.
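If you'd rather drive it from a script than from Node-RED, a thin Python wrapper can build the same command line shown above. The script path and device name here are assumptions; point them at wherever you installed alexa-remote-control:

```python
import subprocess

def speak(device: str, text: str, dry_run: bool = True):
    """Build (and optionally run) an alexa_remote_control.sh speak command.

    The script path and flags mirror the shell example above; adjust the
    path to where you installed alexa-remote-control.
    """
    cmd = ["./alexa_remote_control.sh", "-d", device, "-e", f"speak:{text}"]
    if dry_run:
        return cmd  # return the command for inspection instead of executing
    return subprocess.run(cmd, check=True)

# With dry_run=True this only shows the command that would be executed.
print(speak("Kitchen Echo", "Sensor out of range!"))
```

Setting dry_run=False would actually invoke the shell script, so you can hook this into any Python-based monitoring loop the same way I use it in Node-RED.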

\n" }, { "Id": "885", "CreationDate": "2017-01-26T21:11:48.920", "Body": "

According to cmswire.com, one of the major security risks with the Internet of Things is Insufficient Authentication/Authorization. When it comes to the Internet of Things, should I use a different password for each of my devices, or is it okay to come up with one very secure password to use on all my devices?

\n\n

More specifically, if I have bought multiple iterations of the same device, should I come up with a different password for each one?

\n", "Title": "Should I use a different password on each IoT device?", "Tags": "|security|", "Answer": "

Many vendors have bad security practices and ship all their devices with an identical default password (which is easier than programming and labeling each device with a unique password or mandating a password change before it can be used).
When such devices are accessible online, it becomes trivial to find them and use such default credentials to abuse them at scale.

\n\n

The fact that you make the effort to change the default password is already sufficient to thwart a large part of that potential abuse, almost regardless of the actual strength of the password you select.

\n\n
\n

Is it okay to come up with one very secure password to use on all my devices?

\n
\n\n

Re-using the same password in many places is universally a bad idea.

\n\n

The main problem is that good security is hard and you can't really tell from the outside if your really good password will be properly secured or not, at least not until the moment that it becomes clear that the security failed, or there was never any security in the first place.

\n\n

For instance even the best password in the world is useless if the device/application/website will effectively hand over that password, in clear text, when asked correctly. It would be especially bad if that password can then subsequently be used to unlock many more devices/applications/sites/secrets.

\n\n

If you don't already have a password manager and don't want one either, simply labeling devices with a unique password is quite effective and secure against digital attacks, as is an old-fashioned notebook.
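As a concrete example of the labeling approach, here's a small Python sketch that generates one independent random password per device using the standard secrets module (the device names are placeholders):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def device_password(length: int = 16) -> str:
    """Generate one cryptographically random password per device."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One password per device: a breach of one device doesn't unlock the others.
passwords = {name: device_password() for name in ("camera", "thermostat", "doorbell")}
for name, pw in passwords.items():
    print(f"{name}: {pw}")
```

Print the result, stick the labels on the devices, and you get unique credentials without needing any password-manager software.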

\n" }, { "Id": "891", "CreationDate": "2017-01-27T10:47:01.973", "Body": "

I am a newbie in IoT and want to start my career in IoT. As I searched on Google for startups in IoT, I found many blogs, which mention the languages used in IoT, like C#, Java and Node.js, and boards like Arduino, Raspberry Pi, Intel, Netduino, etc.

\n\n

As I am new to IoT, I don't know which language is best and which microcontroller I should use to start with.

\n\n

As a basic start, I want to create a device with a display that shows the weather for a location given from my mobile. It may be a good first project, since it covers the hardware, the Internet and the software.

\n\n

The device will be battery-powered with a small digital display, and yes, there are cost restrictions.

\n\n

Which microcontroller and language should I use that fulfils my requirements for showing the weather?

\n", "Title": "Which microcontroller and programming language should I use for an Internet-enabled weather display?", "Tags": "|microcontrollers|hardware|", "Answer": "

Creating a startup is not about what you can do with the technology, and not even about the product. For a successful startup that can captivate VCs, you should first think about the market you are going to serve. But thinking about the market is not enough: you need real data about it. It is not just about something that makes sense to you. Creating a product and then trying to sell it is not a successful approach, and that is the way most unsuccessful startups end. A market is a REAL NEED. When you create a product, create it to address a specific market. This is what makes a successful product: a product that sells itself because people are already looking for it. VCs invest only in startups that have such products, especially if they are already selling.

\n\n

To choose a technology to develop your product, first you need to know what your product needs to do, that is, how it is going to solve the problem in the selected market. Then look at what the potential customers are willing to pay for it. Then choose the technology which allows the fastest time-to-market while keeping the cost within the budget. Then outsource the development, or get a partner who can do it and is willing to work with you, and share the profits 50/50 with your partner. When you have a prototype, start laying out your business plan, and remember that you can only captivate VCs if you show them how they can make money.

\n\n

If you need to lower the cost of your product for mass production, you can use lower-level languages and less resourceful microcontrollers, like Microchip PIC or Silicon Labs EFM with ASM/C/C++. If the product is not going for mass production (100k+ units), use a higher-level language and more resourceful microcontrollers, like MicroPython or Lua on ARM32 or MIPS, or even Linux on ARM32/64. This saves on development costs but increases the price of the hardware. Remember, the price of the product is not just a PCB with components; development, housing, packaging and everything else necessary to sell the product should go into its cost. Put that in the business plan. And don't go to a VC with an Arduino or a Raspberry Pi or an Onion or anything that looks like a hobbyist gadget; make a proper PCB with your logo on it and use a nice housing to make it look like a final product. VCs rarely trust hobbyist gadgets.

\n\n

Start up, not down, and best of luck.

\n" }, { "Id": "894", "CreationDate": "2017-01-27T13:45:02.467", "Body": "

I recently ran across this quote from Security Intelligence about the Internet of things and IPv6:

\n
\n

Analysts predict that there will be 30 billion connected “things” by 2020, yet the IPv4 address space only accommodates 4 billion and change. Even with network address translation (NAT) and private address space, the IoT’s appetite for addresses will overcome IPv4’s ability to sate it.

\n

Enter IPv6, which expands the address space to 340 undecillion, or 3.4×10^38. Well, it’s technically a bit less than that, since some combinations are reserved; nonetheless, that’s still enough usable addresses to allocate about 4,000 to every person on the planet.

\n
\n

What puzzles me is why the Internet of Things would make any difference to the need to switch to IPv6. It seems to me that the vast majority of Things are connected to a router, hence a need only for one world-wide IP.

\n

For instance, your smart oven's (or whatever) IP is 192.168.0.52, that doesn't prevent your neighbour's Echo from having the same IP, because in order to access that IP from outside your home, you have to go through your home's IP address, ex: 148.238.24.9.

\n

Why would the advent of IoT necessitate the switch to IPv6?

\n", "Title": "Why would IPv6 be necessary for the IoT?", "Tags": "|ip-address|", "Answer": "

There are two reasons.

\n\n

(1) The first reason is simple: end-to-end connectivity. If both source and destination have a public IPv4 (or IPv6, of course) address, they can connect to each other in any direction, anytime.

\n\n

Your IoT device with private IP 192.168.0.52, however, can only use NAT to connect out to any public IP on the Internet whenever it wants; the rest of the Internet cannot connect to it. There were kludges like DNAT and uPnP that used to allow you to specify that some incoming connections are enabled, but they are breaking more and more nowadays due to the implementation of CGNAT because of IPv4 shortages.

\n\n

A common (so-called) \"solution\" to this problem is that all your (NATed) devices connect to some central location with a public IP (usually hosted by the manufacturer of the device). This makes it work technically, but involves a privacy issue (you're giving away all the data from your IoT devices), a security issue (since you're wide open to them, a breach or a disgruntled employee can do anything your IoT device can do and access), and a reliability issue (when the manufacturer goes out of business, decides to stop supporting old devices, or suffers outages, all your — and everybody else's — perfectly functional devices will stop working).

\n\n

(2) The second problem is that it will stop working anyway (even for outgoing connections) some time in the future. Not in a year or two, but the more IoT devices and services catch on, the sooner it will start breaking.

\n\n

That is because NAT allows private addresses like 192.168.0.52 to reach the Internet at large. It does that by changing the source address 192.168.0.52 to the public IP of your router, and replacing the source port with a free one from the pool.

\n\n

For example, your first connection from 192.168.0.52:1000 might be (CG)NATed to (public IP) 198.51.100.1:1000, and your neighbour's 192.168.0.77:1000 might get NATed to 198.51.100.1:1001. Your second connection from 192.168.0.52:1001 would then be NATed to 198.51.100.1:1002, etc.
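This port-sharing arithmetic can be sketched with a toy NAT table — a simplified model of the shared per-public-IP port pool, not a real CGNAT implementation:

```python
# Toy model of the port pool a CGNAT shares between several households.
class Nat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 1000
        self.table = {}  # (private_ip, private_port) -> public port

    def translate(self, private_ip: str, private_port: int) -> tuple:
        """Map a private (ip, port) pair onto the shared public IP."""
        key = (private_ip, private_port)
        if key not in self.table:
            if self.next_port > 65535:
                raise RuntimeError("port pool exhausted")  # nobody can connect
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = Nat("198.51.100.1")
print(nat.translate("192.168.0.52", 1000))  # ('198.51.100.1', 1000)
print(nat.translate("192.168.0.77", 1000))  # ('198.51.100.1', 1001)
print(nat.translate("192.168.0.52", 1001))  # ('198.51.100.1', 1002)
```

The RuntimeError branch is the whole point: once the shared pool hits 65535 mappings, every further connection from every household behind that public IP fails.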

\n\n

The problem is, even simple stuff like opening a web page will likely open dozens of connections and use a dozen ports (for DNS queries, HTTPS connections for different elements, JS analytics on different sites, etc.).

\n\n

More demanding programs, like torrent clients, will easily use up thousands of ports. And there are only 65535 ports available for any IP.

\n\n

This means that when several of your neighbours sharing the same CGNAT IP use a bigger share of connections (and more IoT devices will mean more connections), suddenly all 65535 ports on that public IP 198.51.100.1 are used up. That means no new connections can be established for you and your neighbours, which on a bigger scale means lots of people are cut off from their IoT devices, and civilisation as we know it collapses :-)

\n\n

Since we would like to delay this civilisation collapse as long as possible, we're transitioning to IPv6 instead. Please support continued existence of this civilisation by using IPv6 if possible. Thanks!

\n" }, { "Id": "902", "CreationDate": "2017-01-27T21:22:28.993", "Body": "

I encountered this message when I tried to disable the SSID broadcast of my home network. Does this mean that my router's Wi-Fi connection will no longer be encrypted?

\n\n

\"Screenshot\"

\n\n

Sorry I'm pretty new to networking but any help/guidance or further reading is much appreciated!

\n", "Title": "Will disabling my network's SSID broadcast cause my WPA encryption to be disabled?", "Tags": "|security|networking|routers|", "Answer": "

No, the connection will still be encrypted, but services that depend on the SSID broadcast, like WPS (Wi-Fi Protected Setup), won't work. WPS is a simple way to set up the Wi-Fi connection by pressing a button on the router after setting the connecting device into WPS mode, but when the SSID is not being broadcast, the device can't find it and WPS won't work. The encryption mode is WPA or WPA2, not WPS.

\n\n

To connect without the SSID broadcast, you will have to type the SSID manually every time you want to set up a new connection.

\n" }, { "Id": "906", "CreationDate": "2017-01-28T17:11:01.023", "Body": "

The Register have published an article suggesting that the Nest Cam could be recording even when told to switch off:

\n
\n

Alphabet-owned Nest says there is no truth to the allegation that its internet-connected home CCTV cameras continue to record video even when switched off.

\n

This assertion comes after a report from ABI Research found that the Nest Cam keeps drawing a healthy amount of current even when told to turn off, suggesting it's still observing.

\n

According to the ABI Teardown report, the Nest Cam draws 343mA while off, and up to 370mA or 418mA while on, depending on the resolution of the video being streamed to the cloud.

\n

ABI vice president of teardowns Jim Mielke said that while most surveillance cameras would be expected to drop power consumption when moved to their off state, the Nest camera continues to suck juice.

\n
\n

Is there any evidence that the camera is continuing to record even when told not to, or does power usage simply remain high because it's connected to the network waiting for the 'turn on' command?

\n", "Title": "Is the Nest Cam recording even when \"switched off\"?", "Tags": "|privacy|nest-cam|", "Answer": "

Great question! As has been remarked in the comments, this sounds mostly like a news rant that hasn't really been researched completely. All the articles out there say essentially the same thing: no new research has been done to confirm suspicions, and all articles find their source (eventually) in a single article by abiresearch.com. That article in and of itself is lacking; as @jterrace suggested, it would be very simple to just test data transmission. However, none of these tests have been done.

\n\n

In other words, just from a surface glance, it looks not like an absolute scam, but like an insufficiently tested hypothesis which can't truly be proven from the information given. Furthermore, I ran into a few articles like this one from SlashGear.com, which examines ABI's claims.

\n\n

These articles point out that ABI has yet to explain why they feel that the current ought to drop when the camera is off. We tend to think of it as something like a DVD player or a TV that turns on in response to an infrared remote, but in reality, the Nest Cam functions nothing like that.

\n\n

A TV only needs to have a small portion of its system booted up. The Cam, however, must have its whole system booted up, so that it can receive information over the network (drawing electricity) and start the camera as quickly as possible (a cold boot is estimated to take 45-60 seconds).

\n\n

The current staying at 343mA is not unusual: the camera must keep itself booted up and listening to the router. And according to their spokesman:

\n\n
\n

When Nest Cam is turned off from the user interface (UI), it does not fully power down, as we expect the camera to be turned on again at any point in time. With that said, when Nest Cam is turned off, it completely stops transmitting video to the cloud, meaning it no longer observes its surroundings.

\n
\n\n

So the answer is, no, as far as we can tell, the Nest Cam is not recording you when it's off.

\n" }, { "Id": "911", "CreationDate": "2017-01-30T13:16:41.260", "Body": "

I've recently been reading about Mirai, malware (whose source code has been revealed) designed to infect IoT devices. It appears to be a serious threat to poorly secured Internet of Things devices. According to Wikipedia:

\n\n
\n

Mirai (Japanese for \"the future\") is malware that turns computer systems running Linux into remotely controlled \"bots\", that can be used as part of a botnet in large-scale network attacks. It primarily targets online consumer devices such as remote cameras and home routers. The Mirai botnet has been used in some of the largest and most disruptive distributed denial of service (DDoS) attacks, including an attack on 20 September 2016 on computer security journalist Brian Krebs's web site, an attack on French web host OVH and the October 2016 Dyn cyberattack.

\n
\n\n

The article (and others I have read online) shows that Mirai attacks by scouring the Internet for devices that are using factory-default usernames and passwords from a database. Is it enough, then, to simply change your username and password on an IoT device? Will that protect it from a Mirai attack, or does Mirai have other methods of getting in?

\n\n

Note: I am not asking how to tell if my devices are infected: I am asking whether changing the password is adequate to prevent infection.

\n", "Title": "Will changing my user name and password block Mirai attacks?", "Tags": "|security|privacy|mirai|", "Answer": "

Mirai's source code has been released in public, and Jerry Gamblin has kindly created a GitHub repository so that you can easily look through the code for research/academic purposes such as this.

\n\n

I think you'll get the most authoritative answer by dissecting the code to find out how Mirai finds its targets, so I had a little look around and here's what I found:

\n\n
    \n
  1. There are 61 unique username/password combinations that Mirai is programmed with (these are hard-coded).

  2. \n
  3. The scanner skips a hard-coded set of subnets when picking random targets; everything else is fair game. These are: 127.0.0.0/8, 0.0.0.0/8, 3.0.0.0/8, 15.0.0.0/7, 56.0.0.0/8, 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/14, 100.64.0.0/10, 169.254.0.0/16, 198.18.0.0/15, 224.*.*.*+ (multicast), and {6, 7, 11, 21, 22, 26, 28, 29, 30, 33, 55, 214, 215}.0.0.0/8. I've grouped the last set of blocks because these were all labelled as \"Department of Defense\" in the comments of the code.

  4. \n
  5. Mirai performs a rather primitive SYN scan to try and find if any ports are open. If you're not familiar with how SYN scans work, they essentially involve sending a TCP SYN packet, which is the normal process of starting a TCP connection. The attacker then waits in hope of receiving a SYN-ACK packet, which would confirm that the target is listening on the specified port. You can read more about the process on Wikipedia.

  6. \n
  7. Any targets that respond with a SYN-ACK are added to a list of potential victims.

  8. \n
  9. Mirai selects a password to try semi-randomly, using some sort of weighting system and attempts to connect using that.

  10. \n
  11. Mirai then monitors the connection to check whether it was successful.

  12. \n
  13. If the connection times out or something goes wrong, Mirai retries for a maximum of 10 attempts.

  14. \n
  15. If all of this succeeds, tough luck. Your device is now infected until it restarts!

  16. \n
\n\n

So, in summary, to answer your question, yes, the version of Mirai known publicly will be defeated if you change the username and password. Anyone who modified their copy of Mirai could have added additional attack vectors though, although you might not class that as the same malware type any more.
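The "semi-random, weighted" credential selection in step 5 can be sketched like this; the weights and pairs below are illustrative placeholders, not Mirai's actual table:

```python
import random

# Illustrative username/password weights -- NOT Mirai's actual table.
CREDENTIALS = [
    (("root", "xc3511"), 10),
    (("root", "vizxv"), 9),
    (("admin", "admin"), 8),
    (("root", "default"), 5),
]

def pick_credential(rng: random.Random):
    """Weighted random pick, sketching how the scanner chooses a pair to try."""
    pairs, weights = zip(*CREDENTIALS)
    return rng.choices(pairs, weights=weights, k=1)[0]

rng = random.Random(0)
print(pick_credential(rng))
```

The defensive takeaway is visible right in the data structure: any credential pair you remove from the realm of possibility (by changing the defaults) can never be selected, no matter how the weighting works.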

\n" }, { "Id": "918", "CreationDate": "2017-01-30T20:22:25.757", "Body": "

The AWS IoT Button (Amazon link) is intriguing, but I saw something quite concerning in the FAQ:

\n
\n

How long will the battery last?

\n

The battery should last for approximately 1,000 presses. When the device battery runs out of charge, there is no way to recharge or replace the battery.

\n
\n

The battery life is reasonable, but what am I supposed to do when the battery runs out? For a $20 button (which is already expensive!), it seems odd that there is no solution when the battery dies.

\n

Does Amazon want you to throw away the old device and replace it when the batteries are out of charge? Is it replaceable in any way, or is the battery permanently connected to the device in a way that it's impossible to replace?

\n", "Title": "What can I do when the AWS IoT Button runs out of charge?", "Tags": "|batteries|amazon-iot-button|", "Answer": "

It seems that this is what Amazon intended. Since you are being credited for the purchase of the device, it is no loss to you, only to Amazon and to the Earth itself (waste and whatnot).

\n\n

However, there is a new model coming out on Friday, February 3 which will last twice as long as the original.

\n" }, { "Id": "924", "CreationDate": "2017-02-01T15:01:37.127", "Body": "

Soon I will be working on an Ethernet implementation for a bare metal (no OS) capability on an ARM-based processor. I am somewhat familiar with the Ethernet driver model in the Linux Device Drivers book, but I'm wondering if there is a reference for implementing an Ethernet driver for a SoC run with a custom software stack.

\n\n

Are there any reference implementations for ARM architecture processors, or is there any guidance on how to implement an Ethernet driver on an ARM processor?

\n", "Title": "What is a good reference for Ethernet Driver without an OS?", "Tags": "|networking|arm|drivers|ethernet|", "Answer": "

If you are sure about using ARM, then you could have a look at Keil's Ethernet Drivers for ARM. It looks quite promising.

\n\n
\n

RL-TCPnet includes several Ethernet Network drivers. These are\n located in the \\Keil\\ARM\\RL\\TCPnet\\Drivers directory:

\n \n \n
\n\n

As you can see, there are Ethernet driver examples for various evaluation boards with chips from different manufacturers, such as Atmel, NXP and ST Microelectronics.

\n" }, { "Id": "927", "CreationDate": "2017-02-01T18:42:28.917", "Body": "

SmartThings v2 Hubs can process some automations locally:

\n
\n

Some preconfigured automations can run locally.

\n

Manual, on-demand control of a device or SmartApp through the SmartThings mobile app always requires an internet connection to the cloud and cannot be performed locally.

\n
\n

The documentation does not list which devices in particular are able to function without access to the Internet. Since the bulbs use ZigBee, it seems logical that they should be able to run locally, but I've heard people say that they had trouble with it.

\n

Can Philips Hue bulbs be controlled locally via automations if the hub's connection to the Internet is lost?

\n", "Title": "Can I control Philips Hue lights locally using Samsung SmartThings?", "Tags": "|networking|samsung-smartthings|philips-hue|", "Answer": "

If the Internet connection is lost but mains power is still on and the hub's battery backup is working, you have a chance of success: the ZigBee link between the hub and the bulb remains available.

\n\n

ZigBee is a technology for quite short distances. In normal operation the LAN can carry traffic over longer distances, so once the hub and bulb must rely on ZigBee alone, the distance between them becomes a significant factor: a distance the LAN handles easily may be far too much for ZigBee.

\n\n

The answer to your question is that your setup should work, but be aware of the distance between the hub and the bulb.

\n" }, { "Id": "928", "CreationDate": "2017-02-01T19:01:34.487", "Body": "

Given a ZigBee mesh network with several nodes in it. There are established links between each node via router nodes.

\n\n

If Node A wants to send a message to Node Z for the first time then Node A must perform a Route Discovery to determine which intermediate nodes will forward its message.

\n\n

The Route Discovery mechanism is described here. According to it, the route with the lowest cost will be stored in the Routing Tables of the nodes.

\n\n

So far everything is fine, every node knows what to do, they can reach each other.

\n\n
\n\n

Now, an intermediate node between Node A and Node Z breaks down, so the currently stored route becomes unusable.

\n\n

What happens in this case? I imagine that when Node A wants to send a message, it will travel all the way to the broken link where it will get stuck. The last node in the route will send back a message about the failure which will trigger a new Route Discovery by Node A, then a new route will be found and everything will be alright again.

\n\n

It is generally fine (given I was correct); the network recovers. But I am wondering if there are any algorithms or methods that provide a network monitoring feature which continuously checks the state of the links presented in the Routing Tables. So Node A can be notified about the failure before it wants to send another message to Node Z, and instead of running into a dead end, it can start with a Route Discovery at once. So basically what I'm thinking of is a service which periodically checks the links.

\n\n
\n\n

I understand that, as ZigBee is usually used on battery-powered, low-power devices, such a mechanism would not be energy efficient.

\n\n

So, in general, what are currently the most effective link-failure detection mechanisms that can be used in a low-power wireless sensor network, especially in a ZigBee mesh network?

\n", "Title": "Automatic link failure detection methods in ZigBee networks", "Tags": "|networking|power-consumption|zigbee|", "Answer": "

From what I've found, it seems that some implementations (e.g. TI's Z-STACK) recommend refreshing the routing table every so often to avoid 'dead' nodes:

\n\n
\n

Yes, I waited 5 to 10 minutes. What is \"some time\"? I have seen cases where it takes a few minutes to recover. For example, if I cycle power on the gateway, it takes maybe a minute or two for the closest nodes to connect, then another minute or two for each successive level. But I waited much longer than this for the mesh to recover from this routing change.

\n \n
\n \n

Yes, it might take up to several minutes. So, if you wait 5 or more minutes, will your device come back? It is recommended to call NLME_RouteDiscoveryRequest() periodically to maintain the routing table.

\n
\n\n

You can read more about what NLME_RouteDiscoveryRequest() does in the developer guide (see page 11/12):

\n\n
\n

The following figure shows an example of the many-to-one route discovery procedure. To initiate many-to-one\n route discovery, the concentrator broadcast a many-to-one route request to the entire network. Upon receipt of the\n route request, every device adds a route table entry for the concentrator and stores the one hop neighbor that relays\n the request as the next hop address. No route reply will be generated.

\n \n

Many-to-one route request command is similar to unicast route request command with same command ID and\n payload frame format. The option field in route request is many-to-one and the destination address is 0xFFFC. The\n following Z-Stack API can be used for the concentrator to send out many-to-one route request. Please refer to the ZStack\n API documentation for detailed usage about this API.

\n \n

ZStatus_t NLME_RouteDiscoveryRequest( uint16 DstAddress, byte options, uint8 radius )

\n
\n\n
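The periodic refresh recommended above could be wired up as in the following sketch. `NLME_RouteDiscoveryRequest()` exists only inside TI's Z-Stack, so it is stubbed out here; the real signature is the one quoted above, and the option and radius values are illustrative assumptions rather than values taken from the Z-Stack documentation:

```c
#include <stdint.h>

typedef uint8_t byte;
typedef int ZStatus_t;
#define ZSuccess 0

/* Stub standing in for the Z-Stack call quoted above; on real hardware it
 * would broadcast a many-to-one route request into the mesh. */
static ZStatus_t NLME_RouteDiscoveryRequest(uint16_t DstAddress, byte options,
                                            uint8_t radius)
{
    (void)DstAddress; (void)options; (void)radius;
    return ZSuccess;
}

/* Hook this to a periodic timer (e.g. every few minutes) so routing tables
 * are rebuilt before a stale route to a dead node is actually used.
 * 0xFFFC is the many-to-one destination address mentioned in the developer
 * guide; the options and radius arguments here are illustrative only. */
static ZStatus_t refresh_routes(void)
{
    return NLME_RouteDiscoveryRequest(0xFFFC, 0, 5);
}
```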

Fault Tolerance in ZigBee Wireless Sensor Networks is an interesting paper with some more information about how ZigBee networks tolerate node failure. It appears that the implementation used there reconstructed the network when one of the nodes was removed (the exact method of this is not clear, unfortunately), so that the malfunctioning node is no longer included in the mesh. In some cases, this led to sensors becoming 'orphaned' before requesting to rejoin the mesh network via a different route.

\n\n

In summary, from the resources I've found: it depends on your implementation, but most will re-evaluate the routing table reasonably frequently to avoid broken nodes from harming the network. I suspect you'll be able to get a more accurate response if you ask the vendor of your specific ZigBee implementation, since the exact operation will vary.

\n" }, { "Id": "930", "CreationDate": "2017-02-02T02:46:28.483", "Body": "

Currently, I am using RFID technology for my warehouse management. My warehouse spans 4 acres, so I need to deploy many RFID antennas and RFID readers to constantly scan the area for goods movement (active scanning).

\n\n

What technology is available to help me deploy multiple RFID antennas continuously?

\n\n

Currently, I am using an RFID antenna mounted on the wall and connected to the RFID reader. Every item is labelled with an RFID sticker. I've calculated that I will need 30 RFID antennas to cover the whole area, which also means each RFID antenna has to be assigned a unique identifier. Obviously this is not a practical way to do it. Is there any convenient way to achieve this?

\n\n

Update 05-02-2017

\n\n

Since I have vehicles and manpower to move the items, another idea is to mount an RFID antenna and reader on each forklift, or to provide RFID handhelds to the workers. The downside is that I would have to stop all movement for perhaps a day to quickly label each item and manually update its location; from then on, the vehicle-mounted RFID equipment or the handhelds would record the location and movements of the goods. Is this the best approach with current RFID technology? It seems there are no better ways RFID can help in warehouse management.

\n", "Title": "IoT for industrial warehouse management", "Tags": "|hardware|rfid|", "Answer": "

Option 1

\n\n

One solution I have seen and can recommend is to divide your warehouse into areas and create gateways through which the forklifts or trucks enter them. You then set up the antennas on those gateways and force the drivers to pass through them in order to track the goods movement. Basically it is the same logic as in any shop with RFID antennas at the entrance scanning people going in and out, which trigger the alarm as soon as they read a tag that was not killed. The antennas should be powerful enough to scan a whole pallet or even a truck. You should also consider the type of RFID tag you will use.

\n\n

[Updated]

\n\n

Option 2

\n\n

Given that the forklifts are the only things that move the pallets around the warehouse, why not mount the RFID antenna on them? By adding a geolocation sensor you can know exactly where a pallet was stored, and if you add a gyroscope you could even know the height of the rack. A step further would be to add a weight sensor to the forklift to detect when a pallet is picked up and when it is dropped, plus use the pallet's weight to calculate the accumulated load of the rack.

\n" }, { "Id": "932", "CreationDate": "2017-02-02T10:14:15.523", "Body": "

I was wondering if someone knows a way to convert an ESP8266 to non-WiFi use, i.e. have it connected through Ethernet, preferably with PoE capability.

\n\n

Reason for asking: my plan is to have sensors in a locker, which is not a WiFi-friendly environment. I want to monitor humidity and temperature in particular. I have a lot of ESP8266 units and like the ESPeasy firmware, so I want to stick to that platform.

\n", "Title": "ESP8266 with ethernet connection", "Tags": "|ethernet|interfacing|power-sources|esp8266|", "Answer": "

The ESP8266 was not designed with an Ethernet MAC, but this should not stop you. However, as Sean has said, it imposes a set of pretty severe restrictions on you.

\n\n

You say that you wish to stay with the ESP8266 platform, but if your project cannot deal with the measly data rates provided by using an ENC28J60-style chip, or bit-banging Ethernet, then there is an alternative. The ESP32 has a 10/100 Mb/s Ethernet MAC that only requires a PHY, magnetics, and an RJ45 connector, and the ESP32 modules are just as cheap (if not cheaper) than the ESP8266 ones.

\n\n

The unfortunate downside to this approach is that it does not appear that many ESP32 shields have made it to market yet.

\n" }, { "Id": "938", "CreationDate": "2017-02-02T14:01:28.673", "Body": "

The Chevrolet Cruze (2016) is supposed to have a Wi-Fi hotspot. I do see it showing up in my available network connections, but how do I find the password? As far as I remember, the dealer didn't tell me what it was; and if they did, I've forgotten it.

\n\n

Is there a way to find my Chevrolet Cruze's Wi-Fi hotspot password on the car? (Obviously, I have physical access.)

\n", "Title": "Connect to Chevrolet Cruze Hot-Spot Wi-Fi", "Tags": "|security|wifi|smart-cars|", "Answer": "

According to OnStar - Connecting to your vehicle's Wi-Fi Hotspot

\n\n
\n

To get your hotspot name (SSID) and password, press the Voice Command button and say \"Wi-Fi settings.\"

\n
\n" }, { "Id": "942", "CreationDate": "2017-02-02T20:29:47.640", "Body": "

I have been looking for an ambient light sensor to use with other IoT smart home devices. In my opinion this would be a commonly requested device for controlling internal lighting. For example if you had smart lighting, you may want to turn the lights off if the sun is shining brightly or turn the lights on if it's cloudy. Weather services could help, but ambient light at the location would be far more reliable.

\n\n

Bonus points for a hubless design (i.e. Wi-Fi rather than Z-Wave/ZigBee); integration with IFTTT etc. is also key.

\n", "Title": "Are there any IoT outdoor lighting sensors?", "Tags": "|smart-home|sensors|ifttt|connector-services|", "Answer": "

I use Aeotec Multi Sensor 6. It has sensors for

\n\n

It is not meant for outdoor use, but I have two outside (front and back of the house) under the eaves, and they work fine. I make sure they both agree before declaring that it's dark outside. I use this to control the outside lights and to close the blinds. As a bonus, when someone is outside at night, I turn the lights on as well.

\n

I also use one in each room to detect motion and lux, to decide when to turn the lights on and off in that room.

\n

It's a Z-Wave compatible sensor and can run off a battery or wired USB power. The latter is the main reason I use this device; who wants to change batteries all the time?

\n

Finally, it does require a hub/server, but you can directly link the sensor to the light, which bypasses the hub.

\n" }, { "Id": "943", "CreationDate": "2017-02-03T12:14:11.477", "Body": "

I would like to embed a simple mobile modem in my laboratory device. The capability is not really important - I want to be able to send small amounts of data and have no hard requirements on transfer speed or latency. That means that slow data is OK, text message is OK and even manual modulation over a voice line is OK.

\n\n

Assume that I live in an area with generous availability of carriers. If you know of a modem that is applicable to only part of the world, please give that as an answer with that information. If you know of a modem capable of operating anywhere, all the better.

\n\n

The capability of the modem is not really a concern so much as availability (to consumers), size and price, so GSM, 3G or LTE (even NMT!) matter a lot less than whether I can get my hands on one for a reasonable price and fit it into a small shell. This means that I, as an individual, will typically buy one modem at a time.

\n\n

What brands, vendors and retailers can I choose from? Are there any caveats I should be aware of?

\n", "Title": "Embedded modem options", "Tags": "|mobile-data|", "Answer": "

As requested in comments, I am posting this as an answer.

\n\n

I recommend an Orange Pi 2G IoT. They also have a 4G model.

\n\n

Here's the 2G version, which is considerably cheaper & ought to be enough:

\n\n

\"enter

\n\n
\n\n

[Additional] I am very fond of the Orange Pi line, as it has long had things like a SATA controller available and generally offers a wider selection than the Raspberry Pi range.

\n\n

However, the Onion Omega 2 is my favourite and, IMO, superior to any Pi; but since it has no cellular capability, it is not the answer here. I just mention it to help others.

\n\n

Two things that make it stand out are that it comes pre-loaded with Linux in flash and that, out of the box, it acts as a server, so it is straightforward to connect to it over WiFi. Check out the spec. There is also a new version, which is pricey at $49, but which has 8 GB of flash rather than relying on a micro SD card.

\n\n

The Omega 2 range is not much bigger than a large SIM card.

\n\n

\"enter

\n" }, { "Id": "945", "CreationDate": "2017-02-03T17:16:09.860", "Body": "

Sometimes, my Google Home gets a little confused and completely misinterprets what I've asked (and it seems other people get the same problem quite often).

\n\n

For example, the assistant thinks Home Alone is \"everything wrong with Deadpool in 15 minutes or less\", just gives up sometimes when things go wrong and can't read calendars correctly.

\n\n

Obviously, when things go wrong, it'd be nice if the developers would fix the issue, so I'd like to know if it's possible to report a specific conversation as incorrect or broken, so that they can sort it out.

\n\n

Is there any way I can alert the Google developers if I come across any bad behaviour, so that they can fix the issue? Is this even necessary at all?

\n", "Title": "How can I report an incorrect response from my Google Home?", "Tags": "|google-home|", "Answer": "

Since no one is taking this one on, I'll take a shot at it. :)

\n\n

Android Google Home App: If you have the Android Google Home app, then follow these steps:

\n\n
    \n
  1. Make sure you're on the same wifi as your Google Home and open the Android app.

  2. \n
  3. Go to the hamburger menu, and go to Help & Feedback > Submit Feedback Report (bottom of screen)

  4. \n
  5. Select the device you've had an issue with

  6. \n
  7. Select your correct e-mail address

  8. \n
  9. Write up your report! Include your e-mail address and relevant keywords.

  10. \n
  11. Check the \"Include screenshot and logs\" box.

  12. \n
  13. Submit!

  14. \n
\n\n

iPhone & iPad (I'll just quote from the same Google Support page):

\n\n
\n \n
\n\n

But... there's more.

\n\n

Of course, Google Home being what it is, you ought to be able to send feedback via voice, oughtn't you? The answer is yes. You should be able to... and you can! Simply say the following:

\n\n
Ok, Google  [or, Hey, Google]\n
\n\n

then

\n\n
Send Feedback.\n
\n\n

And now you can fire away your bug or feedback report by voice.

\n" }, { "Id": "950", "CreationDate": "2017-02-04T14:47:13.990", "Body": "

I recently read an article in The Register, Don't let cloud slurp all your data. Chew it on the edge, says HPE:

\n\n
\n

The basic pitch is that HPE's gear can do compute on your shop floor without taking up large chunks of your floorspace, meaning you don't need to splash out on collecting and moving data back and forth, or spend megabucks on cloud services.

\n
\n\n

However, edge computing (which essentially just seems like a buzzword for processing data locally!) seems to have a few problems to me. Wikipedia quotes a source which says, \"Cloud computing is cheaper because of economies of scale\", so surely edge computing misses out on the benefits you get from massive-scale computing in data centres?

\n\n

Why would edge computing be useful in some cases? Is it only really useful in cases where huge amounts of data need to be sent, where it would be impractical to send it over the Internet?

\n", "Title": "Which problems with cloud computing make edge computing useful?", "Tags": "|edge-computing|cloud-computing|", "Answer": "

Remember, you are always processing data at the edge, even if it's not obvious. The choice to sample data at a particular frequency, whether that is 1 Hz or 100 kHz (particularly with analogue data), is a form of edge processing. Very few scenarios will transmit data on every clock cycle of the processor.

\n\n

Some scenarios where explicit edge processing is useful

\n\n\n\n

When considering material about edge processing, remember to consider the source. Existing IT vendors that haven't cracked the cloud market (IBM, HP, Cisco) are worried that IoT, as the next big thing, bypasses them altogether. As a result they will aggressively market edge processing. Indeed, 'fog computing' is a term created by Cisco to have a 'cloud' closer to the ground (or something). Obviously cloud vendors are marketing the reverse, for their own bottom line.

\n" }, { "Id": "955", "CreationDate": "2017-02-04T20:49:22.473", "Body": "

I am playing around with MQTT CONNECT messages. I have a simple C program which opens a TCP/IP socket towards an Mosquitto broker running on my laptop, sends an MQTT CONNECT message, (normally) receives the 4 byte long CONNACK reply then closes the socket and exits the program.

\n\n

Currently I do not build my own CONNECT message but use one from a Wireshark capture.

\n\n

\"Wireshark

\n\n

It can be exported as a C array; here is the MQTT part:

\n\n
char packet_bytes[] = {\n  0x10, 0x20, 0x00, 0x06, 0x4d, 0x51, 0x49, 0x73,\n  0x64, 0x70, 0x03, 0x02, 0x00, 0x3c, 0x00, 0x12,\n  0x72, 0x6f, 0x6f, 0x74, 0x2e, 0x31, 0x34, 0x38,\n  0x35, 0x38, 0x39, 0x30, 0x38, 0x35, 0x37, 0x31,\n  0x39, 0x34\n};\n
\n\n

Using this unmodified array everything works just fine, here is the broker's output:

\n\n
1486237905: New connection from 192.168.1.2 on port 1883.\n1486237905: New client connected from 192.168.1.2 as root.1485890857194 (c1, k60).\n1486237905: Sending CONNACK to root.1485890857194 (0, 0)\n1486237905: Socket error on client root.1485890857194, disconnecting.\n
\n\n
\n\n

The problems start when I want to modify the Client ID in the message. My simplest attempt is chopping the last character, 4, off the end of the ID.

\n\n

I think this requires three modifications in the actual code.

\n\n
    \n
  1. Deleting the last byte from the array, the 0x34.
  2. \n
  3. Decrementing the Remaining Length field (2nd byte in the array) in the message. So from 32 to 31, 0x20 --> 0x1F.
  4. \n
  5. Decrementing the number of bytes parameter of the send function. From 34 to 33. (+2 because of the Header Flags and Remaining Length fields)
  6. \n
\n\n
\n\n
char packet_bytes[] = {\n  0x10, 0x1F, 0x00, 0x06, 0x4d, 0x51, 0x49, 0x73,\n  0x64, 0x70, 0x03, 0x02, 0x00, 0x3c, 0x00, 0x12,\n  0x72, 0x6f, 0x6f, 0x74, 0x2e, 0x31, 0x34, 0x38,\n  0x35, 0x38, 0x39, 0x30, 0x38, 0x35, 0x37, 0x31,\n  0x39\n};\n\n\nif( send(s , packet_bytes , 33, 0) < 0)\n{\n    puts(\"Send failed\");\n    return 1;\n}\n
\n\n

It does not work, here is the broker's output:

\n\n
1486239491: New connection from 192.168.1.2 on port 1883.\n1486239491: Socket error on client <unknown>, disconnecting.\n
\n\n
\n\n

I know that the Remaining Length field requires special encoding, but not for values under 128.

\n\n

\"Remaining

\n\n
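For completeness, the special encoding mentioned above (needed once Remaining Length reaches 128) is the standard 7-bits-per-byte scheme from the MQTT specification; a sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* Encode an MQTT Remaining Length value into buf (1 to 4 bytes): each byte
 * carries 7 bits of the value, and the high bit is set when another byte
 * follows. Values below 128 therefore fit in a single byte unchanged.
 * Returns the number of bytes written. */
static size_t encode_remaining_length(uint32_t len, uint8_t buf[4])
{
    size_t n = 0;
    do {
        uint8_t digit = (uint8_t)(len % 128);
        len /= 128;
        if (len > 0)
            digit |= 0x80;   /* continuation bit */
        buf[n++] = digit;
    } while (len > 0 && n < 4);
    return n;
}
```

For example, 31 encodes as the single byte 0x1F, while 321 encodes as the two bytes 0xC1 0x02.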

What did I miss here, what should I modify beside the Remaining Length field?

\n", "Title": "How to modify only the Client ID in an MQTT CONNECT message?", "Tags": "|mqtt|mosquitto|", "Answer": "

I managed to find my mistake. I had mistakenly assumed that the Client ID is a fixed field, but it is actually part of the Payload of the message and thus needs a length prefix. From the specification:

\n\n
\n

The payload of the CONNECT Packet contains one or more length-prefixed\n fields, whose presence is determined by the flags in the variable\n header. These fields, if present, MUST appear in the order Client\n Identifier, Will Topic, Will Message, User Name, Password

\n
\n\n

So one more byte should be decremented in the message. The correct steps:

\n\n
    \n
  1. Deleting the last byte from the array, the 0x34.
  2. \n
  3. Decrementing the Remaining Length field (2nd byte in the array) in the message. So from 32 to 31, 0x20 --> 0x1F.
  4. \n
  5. Decrementing the length-prefix byte of the Client ID in the payload. In my case it is the 16th byte (counting from 1) 0x12 ---> 0x11.
  6. \n
  7. Decrementing the number of bytes parameter of the send function. From 34 to 33. (+2 because of the Header Flags and Remaining Length fields)
  8. \n
\n\n
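Rather than patching the captured bytes by hand, the three length fields stay consistent automatically if the packet is built from the Client ID. Here is a sketch for the MQTT 3.1 ("MQIsdp") packet shown above, assuming the Remaining Length stays below 128 and ignoring error handling:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build a minimal MQTT 3.1 CONNECT packet: fixed header, variable header
 * ("MQIsdp", protocol level 3, clean session, keep-alive 60 s), then the
 * length-prefixed Client ID as the payload. Assumes Remaining Length < 128
 * so it fits in a single byte. Returns the total packet size. */
static size_t build_connect(const char *client_id, uint8_t *out)
{
    static const uint8_t var_header[] = {
        0x00, 0x06, 'M', 'Q', 'I', 's', 'd', 'p',  /* protocol name */
        0x03,                                      /* protocol level */
        0x02,                                      /* connect flags: clean session */
        0x00, 0x3c                                 /* keep-alive: 60 seconds */
    };
    size_t id_len = strlen(client_id);
    size_t remaining = sizeof var_header + 2 + id_len;

    out[0] = 0x10;                              /* CONNECT packet type */
    out[1] = (uint8_t)remaining;                /* Remaining Length */
    memcpy(out + 2, var_header, sizeof var_header);
    out[2 + sizeof var_header] = (uint8_t)(id_len >> 8);   /* ID length prefix */
    out[3 + sizeof var_header] = (uint8_t)(id_len & 0xFF);
    memcpy(out + 4 + sizeof var_header, client_id, id_len);
    return 2 + remaining;
}
```

Building the packet for the shortened ID root.148589085719 yields the 33-byte message described in the steps above, with 0x1F as the Remaining Length and 0x11 as the low byte of the Client ID length prefix.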

After this additional step the broker sent back the CONNACK message.

\n" }, { "Id": "956", "CreationDate": "2017-02-05T11:36:21.817", "Body": "

I have a previous question and to get closer to a solution I want to enable Mosquitto broker logging on Windows 7.

\n

Originally I have started the broker manually as follows:

\n\n
mosquitto -p 1883 -v\n
\n

-v means verbose console logging. But this does not provide enough information, only the following line in case of my problem:

\n
1486293976: Socket error on client <unknown>, disconnecting.\n
\n
\n

I have tried doing what is described in this answer. Here is the config file's logging part:

\n
# Note that if the broker is running as a Windows service it will default to\n# "log_dest none" and neither stdout nor stderr logging is available.\n# Use "log_dest none" if you wish to disable logging.\nlog_dest stdout\n\n# If using syslog logging (not on Windows), messages will be logged to the\n# "daemon" facility by default. Use the log_facility option to choose which of\n# local0 to local7 to log to instead. The option value should be an integer\n# value, e.g. "log_facility 5" to use local5.\n#log_facility\n\n# Types of messages to log. Use multiple log_type lines for logging\n# multiple types of messages.\n# Possible types are: debug, error, warning, notice, information, \n# none, subscribe, unsubscribe, websockets, all.\n# Note that debug type messages are for decoding the incoming/outgoing\n# network packets. They are not logged in "topics".\nlog_type error\nlog_type warning\nlog_type notice\nlog_type information\n\n# Change the websockets logging level. This is a global option, it is not\n# possible to set per listener. This is an integer that is interpreted by\n# libwebsockets as a bit mask for its lws_log_levels enum. See the\n# libwebsockets documentation for more details. "log_type websockets" must also\n# be enabled.\n#websockets_log_level 0\n\n# If set to true, client connection and disconnection messages will be included\n# in the log.\nconnection_messages true\n\n# If set to true, add a timestamp value to each log message.\nlog_timestamp true\n
\n

In this case I have started the broker as follows:

\n
mosquitto -p 1883\n
\n

The -v option would override the config file with the default config, so I have left that out. But I see no logging on the console.

\n
\n

Instead of stdout I have tried to log into a file, and changed the configuration as follows:

\n
log_dest file d:\\mosquitto.txt\n
\n

I have created the file manually and started the broker in the same way, but to no avail.

\n
\n

I do not get any log messages if I do not use the -v option.\nHow should it be done properly?

\n", "Title": "How to enable detailed logging of Mosquitto broker on Windows 7?", "Tags": "|mqtt|mosquitto|microsoft-windows|", "Answer": "

I found this a while back but I'm unable to attribute it to the original author. It works great for existing logs, but you can't 'tail -f' with this solution:

\n\n

sudo cat /var/log/mosquitto/mosquitto.log | grep -v datab|perl -pe 's/(\\d+)/localtime($1)/e'

\n\n

I use this on Linux, but it should work on WSL/Cygwin.

\n" }, { "Id": "958", "CreationDate": "2017-02-05T14:52:41.057", "Body": "

When you get started with automating your home, you quickly find out that many devices need a hub or bridge to function correctly. For example, the Philips Hue bulbs need a bridge, August Smart Locks need a different bridge, and some people also buy hubs like the SmartThings hub or the Vera hub.

\n\n

There are lots of people who don't seem to be sure whether they need a hub when they start automating their home, but often the explanations aren't clear.

\n\n

Why might I need a hub or bridge instead of just connecting all my devices straight to my home network?

\n\n

For example, if I have some Philips Hue bulbs, an Amazon Echo and an ecobee3, how can I figure out if I need a hub? Is there a methodology that will help me to determine which hub is best?

\n", "Title": "Why do I need hubs for some devices when automating my home?", "Tags": "|smart-home|", "Answer": "

Technically, different devices will communicate using protocols that are not internet-based, protocols that are proprietary, and tied together with different administration tools.

\n\n

The reality is, particularly in consumer IoT, that you are witnessing an important battle between vendors that have an interest in dominating the consumer IoT (or home automation) market. Vendors of chipsets will promote their chosen radio network, vendors of products will promote their management tools, and others will position themselves as the one hub to rule them all. People, and minnow tech companies, fed up with this try and promote standards, which often get co-opted by major players. You will see this where every 'standard' has an 'alliance' page, which is the source of their ability to produce a standard.

\n\n

There is no single 'hub', as there is no single vendor. People need to commit to an ecosystem (or two) and hope for the best. In much the same way that people will commit to the Apple or Android mobile ecosystem, they will commit to a 'smart home' ecosystem. Samsung wants to be the 'Apple of IoT', so does Philips, so do many others.

\n\n

While there are efforts to standardise, because of the frustrations that you are questioning, it is unlikely that any standard will emerge soon, and even more unlikely that it will be a truly open standard.

\n\n

This answer may seem cynical, and more opinion than fact, but IoT is a new market with huge rewards for vendors that dominate. Current users of IoT are early adopters that will be frustrated while the battle of technology giants is played out. Understanding the current state of the market is important to adjust your expectations while evaluating the technology.

\n" }, { "Id": "966", "CreationDate": "2017-02-06T13:02:45.543", "Body": "

I have used MQTT to connect all my ESP8266 units but I have a general question regarding topics. According to www.hivemq.com:

\n\n\n\n

I have pretty much adhered to this, but I use some special characters (% and \u00b0, for example). For instance, I use:

\n\n

Garage_Sensor_001/Temperature/\u00b0C value

\n\n

Livingroom_HID_002/Switch_001/Action value

\n\n

Bedroom_Sensor_001/Motion_001/Detection value

\n\n

I.e.

\n\n

PLACEMENT_OF_NODE/TYPE_OF_SENSOR_UNIT_OR_ACTION/FUNDAMENTAL_UNIT_OF_VALUE_IF_ANY

\n\n

So my question is: Should I use special characters when naming MQTT topics?

\n", "Title": "Should I use special characters in MQTT topics?", "Tags": "|mqtt|", "Answer": "
\n

Use only ASCII characters.

\n
\n

ASCII format for Network Interchange specifies that the ASCII range extends from hexadecimal 0 to 7F, i.e., 128 characters. I think % should be supported but not \u00b0.
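A simple guard reflecting that advice: reject any byte outside printable ASCII (which catches \u00b0 immediately, since it encodes as two bytes in UTF-8), as well as the wildcard characters + and #, which must never appear in a published topic name. This check is deliberately stricter than the MQTT specification, which does permit UTF-8 topic names:

```c
/* Return 1 if the topic contains only printable ASCII (0x20-0x7E) and no
 * wildcard characters, 0 otherwise. Deliberately stricter than the MQTT
 * spec, which allows UTF-8 topic names. */
static int topic_is_safe(const char *topic)
{
    const unsigned char *p = (const unsigned char *)topic;
    for (; *p != '\0'; p++) {
        if (*p < 0x20 || *p > 0x7E)
            return 0;   /* control byte or non-ASCII (e.g. the bytes of a degree sign) */
        if (*p == '+' || *p == '#')
            return 0;   /* wildcards are only valid in subscription filters */
    }
    return 1;
}
```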

\n" }, { "Id": "970", "CreationDate": "2017-02-06T23:44:25.123", "Body": "

Most of the commands I send to SIM800C module returns ERROR message to me.

\n\n

For example:

\n\n

If I send AT+CSQ, it returns the expected response.

\n\n

One of the basic commands that didn't work for me is the AT+CPIN? PIN checking command.

\n\n

In the datasheet, I can't even find the possible cause for this error.

\n\n

Another information I have:

\n\n

Among many explanations for the problem, I found one (I don't remember where) that said to send a command to expand the ERROR into details. As a result of AT+CPIN?, I received an error that corresponds to \"no card inserted\", and I don't know why. The SIM card works fine; I have tested it in my phone.

\n\n

The SIM card is not detected by the module. I measured the voltage on the card bus and I get 0 volts. I don't know whether that is the cause or a consequence of the malfunction, or even whether it is related to the main problem of this question.

\n\n

This is my circuit:

\n\n

\"SIM800

\n", "Title": "Receiving \"ERROR\" message from SIM800C module", "Tags": "|hardware|mobile-data|gsm|", "Answer": "

The problem was bad contact between the SIM card contact block and the board. It was difficult to find because when I touched the terminal to measure it, the probe pressed it against the board, the contact was made, and the problem couldn't be observed.

\n\n

I made a check list for resolution:

\n\n\n\n

After eliminating most of those possibilities, the only one left was the last. And sure enough: I touched the GND in the card holder very, very delicately and there the problem was, a Heisenbug.

\n\n

I hope this answer and checklist help a lot, because information about this kind of problem is very difficult to find.

\n" }, { "Id": "980", "CreationDate": "2017-02-07T17:18:34.337", "Body": "

Apparently, lots of Google Home speakers were activated by Google's Super Bowl ad:

\n
\n

Early during tonight\u2019s game, Google\u2019s ad for the Google Home aired on millions of TVs. We\u2019ve actually seen the ad before: loving families at home meeting, hugging, and being welcomed by the Google Assistant. Someone says \u201cOK Google,\u201d and those familiar, colorful lights pop up.

\n

But then my Google Home perked up, confused. \u201cSorry,\u201d it said. \u201cSomething went wrong.\u201d I laughed, because that wasn\u2019t supposed to happen. I wasn\u2019t the only one.

\n
\n

I recently asked about how you can stop Alexa from being activated by TV presenters and why Amazon Echos don't respond to TV ads, but this article makes me wonder if Google thought to add the same sort of protections (I suspect the answer is no, but couldn't find any sources to prove it).

\n

Does the Google Home use any sort of frequency detection or signalling to stop advertisements from triggering the device?

\n", "Title": "Does the Google Home have any protection against TV advertisements triggering it?", "Tags": "|google-home|", "Answer": "

It looks like probably not. I have searched around quite a bit, and I have found a couple of pieces of evidence:

\n

1. The article you referenced.

\n

If you read the article again, you will see that the Google Home did pick up the signal - and interpreted it as a wake up call. Probably the volume and other background noise prevented the signal from being understood, hence, the "Sorry, something went wrong."

\n

2. Google Assistant has retired Personalized Voice Recognition.

\n

It appears that in 2015, Personalized Voice Recognition was retired. This means that Google is basing their recognition off of a larger database of various different accents and dialects, not off of a personalized recognition of your voice in particular. There is no reason why the voice of the guy on TV should be treated any differently.

\n
\n

So as far as I can tell, the answer is,

\n

No, for the moment, Google Home can be triggered by advertisements.

\n" }, { "Id": "984", "CreationDate": "2017-02-08T03:50:01.500", "Body": "

Is there a solution for supporting \"find my phone\" from either Amazon Alexa or Google Assistant, using the native Android Device Manager? Or does Google even provide a web services API that could be used to develop this?

\n\n

My Google Home does not (yet) support this, which is a bit surprising, given the integrated Google ecosystem: \"Sorry, locate device is not yet supported.\"

\n\n

Alexa depends upon a 3rd-party integration, with TrackR and IFTTT seeming to be the most popular ones. TrackR requires an additional app - and initial experience shows it to be a battery hog. The only IFTTT applets I've found rely on calling the phone, which won't help much if it is on silent.

\n\n

Using the native Android Device Manager support would seem to be the most ideal here - as it wouldn't require any additional software, and can ring the device even if it's set to silent or vibrate.

\n", "Title": "Alexa or Google Assistant \"Find My Phone\" integration with Android Device Manager?", "Tags": "|alexa|amazon-echo|google-assistant|", "Answer": "

Although it's not ideal, you could use two IFTTT recipes to allow you to call your phone: one to set the ringtone volume to maximum, and one to actually call the phone. It's a little more involved than I'd like, but since IFTTT offers no chaining of actions, it's the best you'll get.

\n\n

Recipe 1: If Amazon Alexa > Say a specific phrase Then Android Device > Set ringtone volume.

\n\n

You could set the specific phrase to \"set my phone volume to maximum\", so you simply have to announce \"Alexa, set my phone volume to maximum\" when you want to do that. Of course, for the Google Home, replace the Alexa trigger with the Google Home trigger.

\n\n

Recipe 2: If Amazon Alexa > Say a specific phrase Then Phone Call > Call my phone.

\n\n

As you'd expect, this will call your phone, hopefully making quite a loud noise now that the volume is set to maximum. IFTTT only supports US phone numbers at the moment, though; if you're in another country, you may have to use a different method (maybe a notification or email).

\n" }, { "Id": "988", "CreationDate": "2017-02-08T17:29:35.180", "Body": "

I am in the process of creating a simple and cheap Wi-Fi PCB design that can send and receive messages to an app (through a cloud of course). I intend to attach this PCB to a fish tank temperature gauge and a fish tank heating tube.

\n\n

Basically here are the only forms of communication between the PCB and my app:

\n\n
    \n
  1. Send temperature readings when requested by app user

  2. \n
  3. Receive requests to change the temperature higher or lower

  4. \n
  5. Turn heating device on/off

  6. \n
  7. I need to configure the PCB so that it can communicate with a cloud service (installing SDKs, frameworks, and program logic to handle sending/receiving messages through the cloud)

  8. \n
\n\n

What components do I need on my PCB to achieve these tasks? What is the bare minimum of flash and processing power that can handle them, or do I not even need a processor chip or flash memory? I'm a beginner with PCBs.

\n", "Title": "How can I design a simple Internet-connected controller for a sensor/heater?", "Tags": "|hardware|wifi|", "Answer": "

You do not want a Wi-Fi antenna on your PCB. If you take that approach, you will need to do some RF layout and submit for type-approval/FCC testing. The best approach to this sort of problem is to use a Wi-Fi module. Here is a page discussing the ESP8266 and RTL8710 modules to give you an idea of what is out there.

\n\n

These modules are (probably) designed so you can use them without having to repeat any regulatory testing. The on-board MCU has a small amount of excess processing power (above what is required to manage the wireless communication) and you can use this to interface to your sensors.

\n\n

If your sensor is analogue, you will need some sort of ADC. Otherwise, find a sensor with an SPI (or similar) digital interface. Your PCB will need to handle connector issues, power supply, indicators and display, that sort of thing.
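Once a module handles the Wi-Fi, the heater logic itself is tiny. A hysteresis (bang-bang) sketch, with made-up setpoint and deadband values:

```python
def heater_should_run(temp_c, heater_on, setpoint_c=25.0, deadband_c=0.5):
    # Hysteresis keeps the relay from chattering around the setpoint:
    # turn on below (setpoint - deadband), off above (setpoint + deadband),
    # otherwise keep the previous state.
    if temp_c < setpoint_c - deadband_c:
        return True
    if temp_c > setpoint_c + deadband_c:
        return False
    return heater_on

assert heater_should_run(24.0, False) is True   # too cold: heat
assert heater_should_run(26.0, True) is False   # too warm: stop
assert heater_should_run(25.0, True) is True    # inside the band: hold state
```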

\n" }, { "Id": "990", "CreationDate": "2017-02-08T18:43:24.330", "Body": "

I was recently reading about Chirp, a data-over-audio protocol that sends information through (rather literally) chirping sounds either in audible or ultrasound frequencies.

\n

One of the use cases suggested on the website is for transmitting game data between devices, and The Register have an article on Chirp being used in nuclear power stations because RF transmissions (e.g. Wi-Fi) are not allowed due to the risk of interference:

\n
\n

As for the nuclear power stations, Chirp's tech has found a useful niche in IoT sensor applications where traditional RF networking cannot be used. Nuclear power stations have an absolute ban on RF over fears of interference \u2013 thereby ruling out Wi-Fi, Bluetooth and all the usual go-to wireless networking technologies \u2013 and when EDF wanted to monitor equipment in its turbine halls the usual shielded cable was seen as too costly and bulky.

\n

"They've got machine plant they want to monitor, diagnose and talk to," said Nesfield. "Chirp is being used in those contexts because it's not RF and doesn't interfere."

\n
\n

In a lot of the cases they've suggested, I imagine there must be quite a lot of noise which could interfere with the chirps - how does Chirp stop other sounds from interfering with the data that they want to send?

\n", "Title": "How can Chirp transmit data over sound without getting interference?", "Tags": "|protocols|", "Answer": "

This page describes the chirp protocol:

\n\n
\n

An entire chirp is a sequence of 20 pure tones of 87.2ms each. The\n first 2 tones are a common \u2018front door\u2019 pair \u2013 \"hj\" \u2013 to indicate to a\n device that the following tones are a chirp shortcode; the next 10\n tones represent the 10-character payload. The final 8 tones are\n Reed-Solomon error correction characters.

\n
\n\n

This doesn't describe the error rejection process, but it is likely to be similar to the way that DTMF protocols rely on a consistent band-pass channel. Each tone needs to fit in a time window, with amplitude and frequency constraints. Making each tone relatively long improves the signal-to-noise ratio; DSP can match an audio stream against all possible legal tone sequences to recognise a potential signal within the noise, and there is a good dose of error-correction coding too.
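For the per-tone detection step, the Goertzel algorithm (the standard DTMF technique) is the usual approach. Chirp's actual decoder isn't public, so this sketch is only illustrative of how one tone stands out from nearby candidates:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of the signal at one frequency bin (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# One 87.2 ms tone (the per-symbol duration quoted above) at 44.1 kHz
rate = 44100
n = int(0.0872 * rate)
tone = [math.sin(2 * math.pi * 1760 * i / rate) for i in range(n)]

# The energy at the transmitted frequency dwarfs a nearby candidate tone
assert goertzel_power(tone, rate, 1760) > 100 * goertzel_power(tone, rate, 2093)
```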

\n" }, { "Id": "993", "CreationDate": "2017-02-09T10:27:14.883", "Body": "

If I wanted to use my phone to control a simple Wi-Fi connected device that just turns the light on or off, or a simple temperature gauge, why don't I just communicate directly with the device instead of going through a cloud? No data persistence or heavy processing or any other fancy stuff to deal with.

\n\n

Is there anything stopping me from designing such a simple IoT product and just starting to mass-produce and sell it? It seems cheaper to cut out the middleman and not have to deal with a cloud's cost/message fees.

\n", "Title": "Is it possible to commercially sell a Wi-Fi IoT product that DOESN'T use cloud?", "Tags": "|wifi|system-architecture|", "Answer": "

If you only want control inside the home, sure, it is possible.
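To illustrate, a LAN-only light controller needs nothing more than a tiny HTTP endpoint on the device. A sketch using Python's stdlib (the /on, /off and /status paths are invented for the example, and GPIO access is stubbed out):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

light_on = False  # stand-in for the GPIO pin driving the relay

def handle_command(path, state):
    """Pure decision logic: returns (new_state, http_status, body)."""
    if path == "/on":
        return True, 200, "light on"
    if path == "/off":
        return False, 200, "light off"
    if path == "/status":
        return state, 200, "on" if state else "off"
    return state, 404, "unknown command"

class Controller(BaseHTTPRequestHandler):
    def do_GET(self):
        global light_on
        light_on, status, body = handle_command(self.path, light_on)
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body.encode())

# On the device itself you would start the server; commented out here:
# HTTPServer(("0.0.0.0", 8080), Controller).serve_forever()
```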

\n\n

The problem is that if you want to offer control from outside the home, things get difficult. Neither the client nor the server is likely to have a static IP, and there are likely to be firewalls and/or NATs in the way.

\n\n

It is possible for the user to set up port forwarding/exceptions in their router/firewall and set up some kind of dynamic DNS to track their dynamic IP and point their client at the dynamic DNS entry but it takes a technical user to do it and it creates security issues.

\n\n

Having a server in a known location on the public Internet is the easiest way to make sure your things can communicate with each other regardless of dynamic IPs, NATs, egress-only firewalls, etc. There are still some security issues, but they are reduced, as you can enforce security policies on the server, which you can more easily monitor and update.

\n\n

IPv6 removes the NAT, but dynamic IPs and egress-only firewalls are still likely to be common.

\n" }, { "Id": "1005", "CreationDate": "2017-02-10T14:24:22.947", "Body": "

Can anyone recommend a simple microcontroller with some I/O (<8) that can be powered using PoE, something cheap like the Raspberry Pi Zero? The requirements are:

\n\n\n\n

The thing is that if I need a power adapter for each microcontroller distributed around the house, I will need a large budget. With a set of PoE-capable chips I avoid occupying power outlets and have safe communication at the same time.

\n\n

Basically, what I want is to run some Ethernet cables and add some sensors and actuators without having to set up a full Arduino at each end-point, without having to change a battery every year, and without spending extra money on Adafruit boards and shields.

\n\n

Another alternative that somewhat fulfills my requirements, although it is not what I was thinking of, is to install commercial Wi-Fi power outlets. \nI find them a bit expensive for the number of units I need.

\n\n

Here is one example from Amazon.

\n\n\n\n

Of course, the fun will be to develop the application by myself

\n\n

\"Orvibo

\n", "Title": "Cheap IoT microcontroller with PoE", "Tags": "|hardware|microcontrollers|ethernet|", "Answer": "

I list microcontrollers up to 30 EUR here. I'll keep looking and update this if I find something interesting. A really good solution would be something below 10 EUR, but I haven't found anything like that.

\n\n\n" }, { "Id": "1006", "CreationDate": "2017-02-10T16:01:06.320", "Body": "

We're working on AWS-IoT using an STM32 microcontroller.

\n\n

Until now, we were writing the certificates to the flash and locking the flash against external reads. As the application code grows, we're getting less space on the flash, so we were planning to move the certificates externally onto an SD card / EEPROM and read them whenever needed before connecting to AWS-IoT.

\n\n

Notes:

\n\n\n\n

If we detect that a device is stolen/rogue, we deactivate its key on the server.

\n\n

What can an attacker do with the certificates (RootCA, server key, client key)?

\n\n

Is it bad practice to keep certificates for such a use case on external storage which can be accessed by an attacker?

\n", "Title": "Is it a bad practice to keep certificates on external memory?", "Tags": "|security|mqtt|aws-iot|", "Answer": "

You mention \u201ccertificates\u201d, but from context, I think you're referring to two different things.

\n\n\n\n

The good news is that this threat analysis is actually not very relevant. You do not need to sacrifice any security! (At least not confidentiality and authenticity properties \u2014\u00a0if you store stuff externally, then availability takes a hit, because that's one piece of the system that could go missing.)

\n\n

As long as you have at least 128 bits of storage that you can write to at least once (which you have, and more), you can implement a secure remote storage solution. Use the limited on-device storage to store a secret key. This secret key must be unique per device; the STM32 has a hardware RNG, so you can generate it on the device during first boot. If your device didn't have a hardware RNG, you could generate the key in a secure off-device location and inject it onto the device.

\n\n

With this key, use authenticated encryption for things that you store off the device. When you want to read some data from external storage, load it, decrypt-and-verify it. When you want to write some data to external storage, encrypt-and-sign it. This guarantees that the data is as confidential and authentic as the data in the internal storage.
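The load/store path can be sketched with Python's stdlib. A real device would use an AEAD (e.g. AES-GCM) so the payload is also confidential; hashlib/hmac here only demonstrate the authenticity half (encrypt-then-MAC style, with the encryption step omitted):

```python
import hashlib, hmac, os

TAG_LEN = 32  # SHA-256 output size

def seal(key, payload):
    # In real firmware: encrypt first, then MAC the ciphertext.
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def unseal(key, blob):
    payload, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("external data failed authenticity check")
    return payload

device_key = os.urandom(16)  # per-device secret from the on-chip RNG
blob = seal(device_key, b"-----BEGIN CERTIFICATE-----...")
assert unseal(device_key, blob) == b"-----BEGIN CERTIFICATE-----..."
```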

\n\n

Authenticated encryption is enough to guarantee the confidentiality and authenticity of the data, but it doesn't quite guarantee its integrity.

\n\n\n\n

To avoid bricking a device in case the external storage is damaged or otherwise lost, with the limited space you have on internal storage you should give priority to whatever is needed to reset the device to a \u201cgood\u201d state, e.g. a factory reset. The second priority is performance considerations.

\n" }, { "Id": "1010", "CreationDate": "2017-02-10T18:22:27.323", "Body": "

I'm working with an IoT platform in FPGA for evaluation and prototyping. I need to provide support for TLS, and for that I need an entropy source.

\n\n

I understand that true random noise sources are quite specialist (if even practical) in FPGA, since the device performance is often pretty good (and hard to find any corner-case parameters), but I can implement a pseudo-random sequence generator without any problems.

\n\n

I only have some standard I/O channels (UART, I2C, etc.), nothing that looks like it could provide much entropy even to seed a PRBS - except maybe an audio ADC input. Are there any reliable tricks for generating entropy in an FPGA which I ought to consider?

\n\n

Assuming that I use a PRBS, I can potentially attach an external noise source which I could certainly use as a seed. I'm interested to know how much this would actually add to my TLS implementation. Would this be reliable and secure, or only slightly better than using a fixed pseudo-random sequence? Would I need to keep polling the external noise source for more entropy?

\n\n

It's OK if the entropy source I end up with isn't properly crypto-secure (since this is just for prototyping), but I'd like to understand the cost-quality trade-off.

\n", "Title": "Can I implement a (weak) entropy source in FPGA?", "Tags": "|security|hardware|", "Answer": "

Do you need to? You can implement a cryptographically secure random generator if you have two things: some rewritable secure storage, and an initial seed. That is, it's enough to seed the RNG once, and then save its state and work off the saved state. It isn't ideal (it would be better to mix in entropy periodically), but it's OK, especially for a development prototype.

\n\n

You do need to have rewritable secure storage. If the device only has ROM and non-secure storage, then this approach is not possible. There must be a location where you can store the RNG state in such a way that your adversaries can neither read it nor modify it.

\n\n

The way this works is: when the device boots, it loads the current RNG state and uses it to generate enough random bytes for twice the size of the RNG state. Write the first half as the new saved RNG state, and use the second half as the initial RNG state for the current session. With any cryptographically secure PRNG, this yields a cryptographically secure PRNG. Note that it is critical that you never reuse a stored RNG state; that is why you must write a new, independent RNG state before you start using the RNG.
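The boot-time rotation can be sketched with a hash chain (a toy stand-in; a real design would expand the state with a vetted CSPRNG construction such as AES-CTR or ChaCha20):

```python
import hashlib

STATE_BYTES = 32

def rotate(saved_state):
    """Derive (next_saved_state, session_seed) from the stored RNG state."""
    # Domain separation gives two independent outputs from one secret state.
    next_state = hashlib.sha256(b"save" + saved_state).digest()
    session_seed = hashlib.sha256(b"session" + saved_state).digest()
    return next_state, session_seed

# Boot: read the state, persist the new state *before* using the seed,
# so a stored state is never reused across sessions.
stored = b"\x01" * STATE_BYTES  # stand-in for secure-storage contents
stored, seed = rotate(stored)
assert len(stored) == STATE_BYTES and len(seed) == STATE_BYTES
assert stored != seed
```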

\n\n

The initial entropy injection can happen during manufacturing, or when the device is set up. Usually those things happen with a connection to a PC which can generate the entropy on behalf of the device.

\n" }, { "Id": "1016", "CreationDate": "2017-02-11T20:06:19.907", "Body": "

Sometimes, when developing an Alexa skill and programming the responses from my service, Alexa mispronounces one of the words in my reply, confusing the user.

\n\n

For example, if I wanted Alexa to say a word in a different language (perhaps for a language learning skill), how can I tell Alexa how to pronounce the word correctly, rather than apply English pronunciation rules?

\n\n

This also applies to English words with odd pronunciations; is there a way to dictate to Alexa the correct pronunciation, or replace it with a custom sound that is correct? Do I need to use additional markup or an API call?

\n", "Title": "How can I change Alexa's pronunciation of a specific word in a skill?", "Tags": "|alexa|", "Answer": "

Alexa supports SSML, which is an XML-like markup language for speech. Instead of returning plain text from your service, you can use SSML responses. The <phoneme> tag is what you need in particular:

\n
\n

phoneme

\n

Provides a phonemic/phonetic pronunciation for the contained text. For example, people may pronounce words like \u201cpecan\u201d differently.

\n
\n

For English words (especially US English), Alexa should be able to pronounce any word if you give it the correct phonetic pronunciation:

\n
\n

The following tables list the supported symbols for use with the phoneme tag. These symbols provide full coverage for the sounds of US English. Note that many non-English languages require the use of symbols not included in this list, which are not supported. Using symbols not included in this list is discouraged, as it may result in suboptimal speech synthesis.

\n
\n

Quotes from Amazon documentation on SSML.

\n

Here's an example of giving Alexa a specific pronunciation:

\n
<speak>\n    <phoneme alphabet="ipa" ph="h\u025b\u02c8l\u0259\u028a\u032f">Hello</phoneme>.\n    <phoneme alphabet="ipa" ph="b\u0254\u0303.\u02c8\u0292u\u0281">Bonjour</phoneme>.\n</speak> \n
\n

The <phoneme> tag supports the IPA and X-SAMPA phonetic alphabets. You can typically find IPA spellings for any word on Wiktionary or through Google.
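To actually deliver SSML from a custom skill, the service response sets outputSpeech to type "SSML" instead of plain text (shape per the Alexa Skills Kit response format); a minimal builder:

```python
def ssml_response(ssml):
    """Minimal Alexa custom-skill response carrying SSML output speech."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }

resp = ssml_response("<speak>Hello</speak>")
assert resp["response"]["outputSpeech"]["type"] == "SSML"
```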

\n

For longer messages, it may be best to use the <audio> tag and record a custom voice:

\n
\n

The audio tag lets you provide the URL for an MP3 file that the Alexa service can play while rendering a response. You can use this to embed short, pre-recorded audio within your service\u2019s response. For example, you could include sound effects alongside your text-to-speech responses, or provide responses using a voice associated with your brand.

\n
\n

Quoted from Amazon documentation on <audio>.

\n" }, { "Id": "1022", "CreationDate": "2017-02-13T13:42:30.517", "Body": "

I've been reading about the Roost Smart Battery. Essentially, it's a battery you can install in your smoke detector which is supposed to notify you on your phone if your smoke detector goes off. It also gives you the capability of silencing your smoke detector from your phone.

\n\n

According to The Wire Cutter:

\n\n
\n

...Roost will automatically notify someone else about the alarm in your home when it\u2019s triggered, without giving that person control over all your other smart-home devices (as is the case with Nest\u2019s version of this feature). But unlike the [Nest] Protect, the Smart Battery doesn\u2019t give you voice alerts, wireless interconnectivity, integrations with smart-home devices, or self-testing sensors [...]

\n
\n\n

According to my understanding (maybe flawed) of the word interconnectivity, it means something along the lines of this definition from Wikipedia:

\n\n
\n

Interconnectivity refers to the state or quality of being connected together, or to the potential to connect in an easy and effective way

\n
\n\n

If the Smart Battery is not in the state of being connected together with your phone, how would they communicate? There is obviously a flawed premise. I assume the flawed premise is my definition of the term interconnectivity. What do they mean by saying that the Roost Smart Battery does not give wireless interconnectivity?

\n", "Title": "How does the Roost Smart Battery connect?", "Tags": "|smart-home|roost|", "Answer": "

From the context, I believe it is referring to interconnectivity with other alarms in the house. From earlier in the article:

\n\n
\n

Most important, an alarm should connect wirelessly with other alarms in the home, or come in a hardwired version that you can wire to other alarms, so that when one alarm senses danger all alarms in the house will sound. This is a crucial safety feature that can save you precious seconds in evacuating your home. Also, many states now require interconnected alarms for new construction, so if you do a significant remodel in your home, you don\u2019t want to end up having to buy a different brand of alarm for the new area (most brands don\u2019t play well with one another when it comes to interconnected alerts).

\n
\n\n

The author of that article seems to use interconnected to refer to the alarms connecting together so that they all sound together in case of a fire. The Roost Smart Battery does not do this, because it just works by listening to detect your alarm going off with a piezoelectric sensor, according to TechHive:

\n\n
\n

The detector relies on the high-pitched sound of the alarm to deform a piezo-electric sensor to trigger its own alarm. I intentionally installed the battery in the smoke detector in my master bedroom because its siren is defective\u2014it makes more of a growl than a high-pitched squeal. But the Roost still went into an alarm state, perhaps after picking up the noise of the detector in the hallway.

\n
\n\n

Your smoke alarm is still 'dumb' and can't share its state with other alarms, and the smart battery has no way of causing your alarm to trigger, either.

\n\n

The Roost Smart Battery is definitely connecting wirelessly with your phone, though. The title of their website is \"Roost Wi-Fi battery for smoke and CO alarms\", so it's plainly obvious that the device does connect via Wi-Fi to your phone.

\n" }, { "Id": "1024", "CreationDate": "2017-02-13T17:21:19.743", "Body": "

I'm looking for a gateway to pass from a 6LoWPAN over 802.15.4 network to 3G. Are there any on the market, or do I have to build one for myself based on Arduino or Raspberry Pi?

\n", "Title": "Are there any 6LoWPAN to 3G gateways available on the market?", "Tags": "|6lowpan|", "Answer": "

I think you will find modules without too much trouble; I found chips that have been around for over a year (and there is a module on Kickstarter which looks like a simple, small production-run sales-testing exercise).

\n\n

A 6LoWPAN to 3G gateway is probably a bit of an esoteric use case - any you can find will be built in to an end product, so it might be cheaper to assemble a unit from modules (or maybe re-purpose an old phone, depending on the scenario).

\n\n

Just to elaborate on the 'how' part, it seems to be common now for these radio interfaces to be integrated with a reasonably powerful MCU which handles the communication stack. Not just the link layer, but some of the security aspects of the protocol too. So this would easily be able to interface to a GSM module through a UART, without needing a further processing device. Building the radio stand-alone would actually be harder than having it tightly integrated with a control processor. You should also be wary of trying to use the ASIC directly on your own board - using a module removes the challenge of RF layout and type approval.

\n" }, { "Id": "1040", "CreationDate": "2017-02-14T19:28:34.000", "Body": "

I was reading this article, "Five Building Blocks of Self-Powered Wireless Sensor Nodes" (shared on IoT Meta) about energy harvesting in IoT.

\n

It lists a couple of harvestable energy sources for example:

\n\n

I am planning to implement some prototype harvesting devices to explore the possibilities and to get some experimental data about how much energy can be harvested in my home. Currently I am trying to identify the spots in my home where these energy harvesters could be placed and used efficiently.

\n

What I thought of:

\n\n

In what other parts of the apartment should I place further harvesters for experimenting?

\n

So, for example, could a microwave oven or other frequently used kitchen appliances be a possible source?

\n

I have an electric boiler above my bathroom (inside the apartment), so maybe the hot water pipes running down to the bathroom are good sources as well.

\n

What else could be there? Are there any more possibilities to harvest vibration energy in a home?

\n", "Title": "Where can I harvest energy in my home to power my wireless sensors?", "Tags": "|smart-home|sensors|power-consumption|sustainability|", "Answer": "

A slightly obscure answer is 'from the sensor'. By making the hardware event driven you can potentially improve the standby power by such a large magnitude that a battery can be regarded as a non-replaceable part of your hardware.

\n\n

See this research from the University of Bristol where they have optimised a switching element for pA leakage. Although I've seen this in passing before, I didn't appreciate the practical value till watching a video which described the scope for using the device.

\n" }, { "Id": "1048", "CreationDate": "2017-02-15T07:47:53.037", "Body": "

I have been working on a project recently and I am using Sensor SPM SLD 723\nto read variations in vibrations.

\n\n

The output I receive from the sensor is only in volts; how can I convert this voltage into a frequency (Hz)?

\n\n

Edit: actual data received from the device

\n\n

\"list

\n", "Title": "How to calculate frequency with the voltage received from a vibration sensor?", "Tags": "|sensors|", "Answer": "

Key here is the datasheet as linked in Jimmy Westberg's answer. The sensor will output:

\n\n
\n

The 4-20 mA vibration transmitters are piezo-electric accelerometers of compression type and provide a 4-20 mA output signal proportional to the true RMS value of vibration velocity.

\n
\n\n

So the output of this sensor is a current signal between 4 mA and 20 mA (not a voltage) that is proportional to the RMS value of the vibration velocity. To read this sensor's output, the current has to be converted to a voltage using a transimpedance amplifier (current-to-voltage converter), or by measuring the voltage drop across a well-defined series resistor.
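As a worked sketch of that conversion chain (the 250 Ω sense resistor and the 25 mm/s full-scale value are assumptions for illustration; take the real numbers from the datasheet):

```python
def adc_voltage_to_current_ma(v_adc, r_sense_ohm=250.0):
    """Loop current recovered from the voltage drop across the sense resistor."""
    return v_adc / r_sense_ohm * 1000.0

def current_to_velocity_rms(i_ma, full_scale_mm_s=25.0):
    """Map the 4-20 mA loop current linearly onto 0..full-scale RMS velocity."""
    if not 4.0 <= i_ma <= 20.0:
        raise ValueError("loop current outside the 4-20 mA range")
    return (i_ma - 4.0) / 16.0 * full_scale_mm_s

# 250 ohm resistor: 1.0 V across it means 4 mA, i.e. zero vibration
assert adc_voltage_to_current_ma(1.0) == 4.0
# Mid-scale current (12 mA) maps to half of full scale
assert current_to_velocity_rms(12.0) == 12.5
```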

\n\n

However, as the sensor output is the true RMS value of vibrations in the specified frequency range (2..10,000 Hz), it is not possible to obtain the frequency (or, to be more precise, the wide frequency band) of the vibration with this sensor. To detect the frequency spectrum, a measurement of the time waveform of the vibration amplitudes would be necessary.

\n\n

This booklet about vibration measurement gives some more insight.

\n\n

The RMS value is typically used in quantifying the vibration level:

\n\n
\n

The RMS value is the most relevant measure of amplitude because it both takes the time history of the wave into account and gives an amplitude value which is directly related to the energy content, and therefore the destructive abilities of the vibration.

\n
\n\n

The purpose of this sensor seems to be for monitoring of machinery where the actual time waveform of the vibration is of little interest. A single value (the RMS value) is sufficient to monitor the operation of the machine against a threshold value. It significantly simplifies measurement.

\n\n
\n

Experience has shown that the overall RMS value of vibration velocity measured over the range 10 to 1000 Hz gives the best indication of a vibration's severity. A probable explanation is that a given velocity level corresponds to a given energy level so that vibration at low and high frequencies are equally weighted from a vibration energy point of view. In practice many machines have a reasonably flat velocity spectrum.

\n
\n" }, { "Id": "1049", "CreationDate": "2017-02-15T08:13:44.340", "Body": "

Bluetooth beacons for localizing lost objects are beginning to spread. You can locate them using an app on your smartphone (yes, and create an account, share your objects on Facebook, ...) with closer/further logic.

\n\n

I tried one, but one day my keys were in fact in my car, which was outside my house, so detecting the beacon from my desk didn't work.

\n\n

I was wondering if Bluetooth is really a good protocol for object location at home, especially with multiple obstacles or outdoors. Other possible networks might be:

\n\n\n\n

So should we stay on BT beacons or can other protocols be more reliable especially for home usage?

\n", "Title": "Is Bluetooth a good enough protocol for \"lost my keys\" beacons?", "Tags": "|wifi|protocols|bluetooth-low-energy|lora|beacons|", "Answer": "

For longer distances, I would recommend you look for a device that uses a longer-range radio link instead of Bluetooth, but a word of warning: they are a bit bulkier, and I'm not 100% sure they make them specifically for keys, but I do know a lot of the newer pet trackers utilize such radio technology, such as the \"Findster\" pet tracker.

\n\n

If you insist on using something Bluetooth-based, then I would use a Bluetooth tracker that has a crowdsourcing feature, such as the Raven Key Finder, which is also a low-energy Bluetooth tracker; I think it uses Bluetooth 5.0, which might explain the extended range.

\n" }, { "Id": "1051", "CreationDate": "2017-02-15T10:31:13.390", "Body": "

I'm having a bit of trouble making sure my project really is feasible.

\n\n

What I want to achieve:\nControl my Senseo coffee machine via the Internet. It boils down to simply controlling 2 buttons.

\n\n

First easy solution:

\n\n
    \n
  1. Setup a NodeJS server on my Raspberry Pi.
  2. \n
  3. I connect my Raspberry Pi's GPIOs to 2 transistors, to control the coffee machine's buttons
  4. \n
  5. I can control the Raspberry Pi's GPIO directly in JavaScript. For example, calling http://myraspberrypi.com/makemeacoffee activates the GPIO, which presses the buttons, and the coffee flows
  6. \n
\n\n

But: I don't want to tie my Raspberry Pi to my coffee machine (I need the Pi for other purposes), and I think that decoupling the web server from the controller itself is a good idea. If tomorrow I want to monitor the temperature of my bathroom, or control a second coffee machine (using another ESP8266), I want to be able to do it without rethinking the whole thing.

\n\n
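The decoupling described above boils down to the web server keeping a routing table of device endpoints instead of touching GPIO itself; adding a second coffee machine or a bathroom sensor is then just one more table entry. A minimal sketch (hostnames and paths are made up for illustration, and shown in Python rather than NodeJS):

```python
# Hypothetical routing table: public command name -> device endpoint.
# The NodeJS server would forward an incoming request to the matching URL.
DEVICES = {
    "makemeacoffee": "http://esp8266-coffee.local/press?button=single",
    "bathroom-temp": "http://esp8266-bath.local/read?sensor=temperature",
}

def resolve(command):
    """Return the device URL to forward to, or None for unknown commands."""
    return DEVICES.get(command)
```

The web server stays generic: supporting a new ESP8266 means adding an entry, not rewriting the server.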

What I need to achieve that is an ESP8266 with NodeMCU, used as a Wi-Fi headless controller (see this link). The Raspberry Pi's GPIOs are no longer used (that's the point). There is only a NodeJS web server on the Pi.

\n\n

Here is a quick sketch of the architecture: \n\"is

\n\n

Let me clarify the role of the main components:

\n\n\n\n

Is this architecture feasible?\nIs this architecture flexible?

\n\n
\n\n

EDIT, to answer Sean Houlihane (spoiler to keep the post relatively short):

\n\n

I'm not 100% sure about this, but a transistor seems to be enough. The coffee machine's switches work on low voltage (3.3 V), and the ESP8266 won't share a ground with the coffee machine.

\n\n

About the temperature sensing and the water level control, the Senseo coffee machine has this built in. \nTypical use: press the center button to launch the heat-up process, then choose your coffee size by pressing the \"single\" or \"double\" button. Once the heat-up process ends, the coffee starts flowing. If there is not enough water, it stops and an LED blinks.

\n\n

The progress I expect:

\n\n\n", "Title": "Is this architecture feasible and flexible?", "Tags": "|raspberry-pi|microcontrollers|wifi|esp8266|system-architecture|", "Answer": "

The architecture which you propose seems OK. You can build more functionality on top of this, such as authentication and request sanitisation (for example time of day controls, rate limiting, etc) so it's a great example to investigate.

\n\n

There are probably some important details in the implementation which you've not looked into yet. For example, the transistor switch: it might need to be a MOSFET, or maybe a relay (or solid-state relay), and there might be voltage and isolation issues.

\n\n

More things to consider: temperature sensing, water level (careful about contamination) and other features. RGB pod tracking (does this make it a single-use machine, or does it need a pod-not-replaced alarm?)

\n" }, { "Id": "1054", "CreationDate": "2017-02-15T14:58:53.660", "Body": "

I'd like to connect a button \"dimmer switch\" like this one:

\n\n

\"picture_of_a_rotary_dimmer_ligts_and_a_living_room\"

\n\n

to the IoT. Maybe there's already such a device; I just could not find one.

\n\n

The underlying idea is to read a \"%\" value from each of these devices, transforming each one into a sort of manual sensor, and to collect the values over the network.

\n\n

As I am just beginning to grasp the concepts of IoT, my naive expectation is that this kind of sensor already exists, connected and integrated into an IoT-enabled, Linux-based operating system, from which point I would know how to proceed. I am just completely lost about the connection between the electronics and the OS.\nI want to store in a database the \"%\" value (or \"dim level\"?) of several of these sensors, connected to a given network through a yet-to-be-determined technology and protocol. Some dashboards would then connect to the database and show the information along with some analytics.

\n\n
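On the electronics-to-OS gap: reading a rotary dimmer's position generally means sampling its potentiometer with an ADC and scaling the raw count to the \"%\" value to be stored. A sketch of the scaling step, assuming a 10-bit ADC (0-1023) as an example:

```python
def adc_to_percent(raw, adc_max=1023):
    """Scale a raw ADC reading to a 0-100 dim level.

    Out-of-range readings (noise, wiring glitches) are clamped
    rather than rejected.
    """
    raw = max(0, min(raw, adc_max))
    return round(100 * raw / adc_max)
```

The resulting integer is what would be timestamped and written to the database for the dashboards.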

Either way, any advice on how to proceed would be great!

\n", "Title": "Can a dimmer switch be connected to the IoT?", "Tags": "|smart-home|communication|system-architecture|lighting|", "Answer": "

Yes, there already are such devices.

\n\n

If you are not bound to the rotary dial, there are dimmers out there (e.g. the Insteon 2477D). There are probably some rotary versions too. The Insteon variant can be controlled via their powerline messages (Spec) or via a hub. According to the Insteon website you could build your own powerline client, hook it up to a Pi, a computer or whatever else you want, and communicate with your dimmer switch. It's also dual band, although the second band is RF, not Wi-Fi, and thus not terribly helpful.

\n\n

However, as far as hubs go, the Insteon hub is not terribly expensive; it bridges over to Wi-Fi and has HomeKit and Alexa integrations.

\n\n

If you want to connect an existing rotary dimmer to the IoT a bit of electrical engineering will be required.

\n" }, { "Id": "1060", "CreationDate": "2017-02-16T16:04:19.480", "Body": "

There is this thing I noticed in my neighborhood: when Sam is waiting for his not-so-automatic door to open, he stops the traffic for 20 or 30 seconds, due to the street configuration.

\n\n

I want to buy a house in this neighborhood (I now own a flat there), and the seller has already installed an automatic door with an infrared remote control, making the sale easier and faster. Please understand that I am not able to (and do not want to) choose or change the automatic door model, otherwise I would have picked some pre-equipped IoT-ready model.

\n\n

I do not want to be like Sam and be a pain in the ass to every other driver in this street, and I am looking for any idea to be able to remotely/automatically open the door when getting close (like 150-200m) to my house.

\n\n

You can assume that, once the house \"knows\" I am approaching, it can open the door. What I am looking for is a way to let my house know that I am approaching. Plus, if a car is already parked in my garage (like my wife's car), the door won't open. There is obviously a safety issue there (a door giving access to my house with no one in front of it), but I do not want to deal with that right now.

\n\n

Thanks for your help or for any idea you can give!

\n", "Title": "Automatically open a garage door when approaching by car?", "Tags": "|sensors|smart-home|", "Answer": "

I'd go for OwnTracks (iOS + Android), which lets your phone send GPS data over the Internet (preferably via MQTT). You could set this up to poll your phone and let a server check whether the signal is approaching the house, which would indicate that the phone is traveling in a car. Even better would be to keep an (old) phone inside the car at all times, used only as a locator for the car.

\n\n
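The server-side \"is the car approaching?\" check can be as simple as a haversine distance test on the GPS fixes OwnTracks publishes, combined with the parked-car condition from the question. A sketch (the 200 m radius and all coordinates are placeholders):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_open(car_fix, home_fix, garage_occupied, radius_m=200):
    """Open only when the car is within the radius and the bay is free."""
    return (not garage_occupied
            and haversine_m(*car_fix, *home_fix) <= radius_m)
```

Comparing successive fixes (distance shrinking over time) would distinguish approaching from merely being nearby.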

Here's the link to the app that I use.

\n\n

You may also want to use the Beacon function within OwnTracks which lets your phone know that it is in the car if it is close to the car Beacon. You find them online here and here.

\n" }, { "Id": "1074", "CreationDate": "2017-02-18T18:15:51.517", "Body": "

According to CNET, the My Friend Cayla doll was banned in Germany because it has been classed as an illegal "hidden espionage device":

\n
\n

If you're considering purchasing a connected toy for your offspring, you might want to think twice.

\n

In Germany, regulators have determined that the My Friend Cayla doll could be up to no good, given its potential to steal information about children who play with it. And they say that German parents whose children are in possession of a Cayla doll should destroy the toy.

\n

The Federal Network Agency said in a press release Friday that it has removed Cayla dolls from the market in Germany and will not look to prosecute parents who have purchased one. It does expect, however, that parents who have bought a doll will assume responsibility for destroying it.

\n

Cayla dolls, which incorporate microphones and ask kids questions about themselves and their parents, are classified as "hidden espionage devices," the possession and selling of which are banned by German law.

\n
\n

I did a bit of research previously when someone asked about the toy, and although it does seem slightly inappropriate to be sending your children's speech to a cloud server, it's not exactly hidden; the device is advertised as 'smart' and connected to the Internet.

\n

Are there other flaws with the doll that made it illegal (hacking, maybe?), or was it banned simply due to the privacy concerns of sending data to the manufacturer?

\n", "Title": "Why was the Internet-connected My Friend Cayla doll banned as a \"hidden espionage device\"?", "Tags": "|privacy|smart-toys|", "Answer": "

One of the key issues with My Friend Cayla is that the audio files it sends are not saved securely. Ken Munro has carried out extensive analysis on this (and many other IoT devices) and points out that the audio files are available on the Internet, with poor access controls!

\n\n

So not only is everything your child says going to a server somewhere, others can listen to it...

\n\n

(piece of advice - for IoT security, follow Ken Munro. You'll be surprised at the access he gains to not just IoT devices, but through them to home computers, passwords and more!)

\n" }, { "Id": "1085", "CreationDate": "2017-02-20T08:23:28.207", "Body": "

I am not sure if this is the best place to ask a question like this.

\n\n

I am working with a wireless sensor network that measures temperature, humidity and illuminance. I have read that outdoor sensors should be located in a place where rain does not hit them.

\n\n

I do not know why I should do this. How are these sensors affected by rain? Has anyone faced this problem before?

\n", "Title": "Temperature sensors affected by rain", "Tags": "|sensors|wireless|", "Answer": "

Building upon several years of deploying outdoor wireless sensor networks, I would like to add the following hint:

\n\n

Think ahead and do not underestimate the problems arising from humidity!

\n\n

I will answer your question by providing some pitfalls with humidity in outdoor devices. However, please consider these general guidelines when planning a wireless sensor network.

\n\n

It is very simple to build a housing that keeps the rain from falling onto your sensor, but as @Ghanima pointed out it very much depends on what you really want to measure.

\n\n

However, the main problem is not rain but humidity. This is roughly what happens with improperly constructed devices in outdoor environments, starting with a dry housing:

\n\n
    \n
  1. The outdoor temperature rises. The temperature of the air in your device rises even more (especially when placed in the sun).
  2. \n
  3. Therefore, the air pressure in your device rises; the air expands, and some of it diffuses out of the device.
  4. \n
  5. The temperature falls. Thus humid (!) air diffuses back into the device.
  6. \n
  7. The humid air condenses inside the device.
  8. \n
  9. The cycle repeats, but the temperature inside the device is not high enough to evaporate the water again. Therefore, water accumulates inside the device. Every day a very tiny amount of water is added.
  10. \n
\n\n
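The condensation step in the cycle above occurs whenever a surface inside the housing cools below the dew point of the air that diffused in. The Magnus approximation gives a quick estimate of that dew point (the coefficients below are a common published fit, b = 17.62 and c = 243.12 °C, valid roughly between -45 °C and +60 °C):

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point in Celsius via the Magnus formula."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)
```

For example, air that entered at 20 °C and 50% relative humidity condenses on any surface below about 9 °C, which a housing easily reaches overnight.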

Several possible approaches to counteract this problem:

\n\n\n" }, { "Id": "1090", "CreationDate": "2017-02-21T17:40:05.290", "Body": "

I was recently reading Amazon's information about the AWS IoT Platform, and came across an interesting example use case:

\n\n

\"Example

\n\n

Although they don't describe how exactly the road condition data is sensed, if the sensor can detect a wet road, why would Amazon suggest sending the data to the cloud? Wouldn't it be simpler just to directly process the sensor data on the vehicle and alert the driver, rather than sensing, sending data to the cloud, waiting for it to be processed, receiving data and then alerting the driver? I can't really see much of an advantage other than the possible analytics data you would gain.

\n\n

Is Amazon's example use case only beneficial when you want to gain analytics data, or are there other reasons that they would suggest to use the cloud?

\n\n
\n\n

I suspect one of the reasons is simply to make people use the service they're trying to sell, but I'm interested in technical reasons, if there are any.

\n", "Title": "Why might data be sent to a cloud service when it could be processed on the edge?", "Tags": "|sensors|aws-iot|cloud-computing|", "Answer": "

There are many factors in choosing whether to process data on-device or in the cloud.

\n\n

Benefits of processing in the cloud

\n\n
    \n
  1. If the algorithm uses floating-point or runs on a GPU, it might not be possible to run on the embedded processor in the sensor.

  2. \n
  3. Even if it doesn't, if the algorithm was developed in a high-level language, it might be too expensive (in developer time) to port it to run on the sensor.

  4. \n
  5. Offloading computation from the sensor may increase its battery life (depending on how this affects network/radio use).

  6. \n
  7. Running the algorithm in the cloud allows it to combine the data from many sensors and make a system-level decision. In this example, that might mean filtering across different cars' sensors, so that washing one car doesn't cause a rain warning in every car.

  8. \n
  9. Processing in the cloud allows you to distribute the information to many places without needing a mesh network, which is a complicated architecture.

  10. \n
  11. You can log more data, which enables better analytics, audit, and development of better algorithms.

  12. \n
\n\n

Benefits of processing on-board

\n\n
    \n
  1. If the raw sensor data is high-bandwidth, it might use less battery to summarise the data and send the summary (depending on what processing is needed to summarise it). This might mean that instead of sending an 8-bit moisture reading 100 times a second, you filter it and send a 1-bit wet/dry flag every 10 seconds.

  2. \n
  3. You might go further, and only wake up the network at all when the sensor has something interesting-looking to report (e.g. the wet/dry state changes)

  4. \n
  5. Reducing the network bandwidth at the sensor end also reduces it at the server end, so you can scale the service to more users (more sensors) very cheaply.

  6. \n
  7. It might be possible to run the service with the same or reduced functionality even when the network is unavailable. In this example, your car might be able to warn you about slippery roads it sees itself, but not give you advance warning from other cars.

  8. \n
\n\n
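The 8-bit-to-1-bit summarisation in the first point is usually done with a hysteresis filter, so a reading hovering near a single threshold doesn't make the flag (and therefore the radio) chatter. A sketch with invented thresholds:

```python
def make_wet_flag(wet_above=150, dry_below=100):
    """Return a stateful filter mapping raw 0-255 moisture readings
    to a wet/dry flag with hysteresis: the flag only flips once the
    reading crosses the far threshold, not in the dead band between.
    """
    state = {"wet": False}

    def step(raw):
        if raw >= wet_above:
            state["wet"] = True
        elif raw <= dry_below:
            state["wet"] = False
        return state["wet"]

    return step
```

Only transitions of this flag would then need to wake the network, as the second point suggests.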

Overall

\n\n

Usually, some combination of the two is optimal. You might do as much processing as you can afford to do on the device, to reduce the need for the network as much as you can, and then run more sophisticated algorithms in the cloud that can combine more inputs or use more compute power.

\n\n

You might start out running all of your processing in the cloud (because it was prototyped in Matlab or Python) and port parts gradually to Rust to enable offline functionality, when you have developer time to spend on it.

\n\n

You might process the data heavily on the device in normal use but also sample and log the raw data sometimes, so that you can upload it to the cloud later (when network is more available) for your analytics.

\n" }, { "Id": "1092", "CreationDate": "2017-02-21T19:29:28.017", "Body": "

I might be putting this into software terms, but I just want to have all my things be the same type, but have multiple instances (multiple things). Each thing must be able to be referenced individually as well, and individually subscribe to messages. Then, I could have multiple Raspberry Pi's send data back to AWS-IoT while also each could subscribe to a unique message. Thank you.

\n", "Title": "When setting up 'Things' in AWS IoT, can I have one thing, and then have many instances of that Thing?", "Tags": "|aws-iot|aws|", "Answer": "

After doing some further research, I'm pretty sure Thing Types are what you want.

\n\n
\n

Thing types allow you to store description and configuration information that is common to all things associated with the same thing type. This simplifies the management of things in the thing registry. For example, you can define a LightBulb thing type. All things associated with the LightBulb thing type share a set of attributes: serial number, manufacturer, and wattage. When you create a thing of type LightBulb (or change the type of an existing thing to LightBulb) you can specify values for each of the attributes defined in the LightBulb thing type.

\n
\n\n

Thing Types do not mean all the devices are treated as one device; each Thing receives its own ARN regardless of whether it has a Thing Type or not.

\n\n

Each Thing should be able to subscribe to a custom topic (if you're using the MQTT broker), even though it has a Thing Type. The only difference is that Things with a Thing Type are given certain (immutable and fixed) attributes which can define properties for that particular type of Things.

\n\n
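Giving each Thing its own command topic is just a naming convention plus MQTT's topic matching; the `+` single-level wildcard is what lets one rule or subscriber address a whole fleet of identical Things. A sketch (the topic layout is illustrative, not an AWS requirement):

```python
def command_topic(thing_name):
    """Per-instance command topic for a Thing (illustrative convention)."""
    return f"things/{thing_name}/commands"

def matches(pattern, topic):
    """Minimal MQTT topic matcher supporting the + single-level wildcard."""
    p, t = pattern.split("/"), topic.split("/")
    return len(p) == len(t) and all(a == "+" or a == b for a, b in zip(p, t))
```

Each Raspberry Pi subscribes to its own `command_topic`, while a fleet-wide publisher or rule can match `things/+/commands`.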

If you want to send messages from all your Things as if they are one, just publish to a common MQTT topic not specific to one device.

\n" }, { "Id": "1093", "CreationDate": "2017-02-21T19:44:19.047", "Body": "

I am trying to set up IP whitelisting for my Mosquitto broker on Windows 7. To do so I have performed the following steps, based on this article: How to Whitelist Your IP - Windows Dedicated.

\n\n
    \n
  1. Open Windows Firewall With Advanced Security from Start.
  2. \n
  3. Select Inbound Rules from the list on the left.
  4. \n
  5. Search for the rules called \"mosquitto\"; there are two each for TCP and UDP. (I do not know why there are two for each.)
  6. \n
  7. Open Properties of the mosquitto TCP rule.
  8. \n
  9. On the Scope tab, in the Local IP address section, select These IP addresses and add the specific IP address: 192.168.1.5 in my case.
  10. \n
\n\n

First, I received the following error.

\n\n

\"Picture

\n\n
    \n
  1. To solve it, the Edge traversal setting has to be modified on the Advanced tab. I changed it from \"Defer to user\" to \"Block edge traversal.\"
  2. \n
\n\n
\n\n

Conclusion: it does not work. I cannot connect to the broker from the 192.168.1.5 address. It is all the same if I select the \"Allow edge traversal\" option.

\n\n

Once I switch back to \"Any IP address\", my client connects without any problem.

\n\n

What's wrong?

\n", "Title": "How to modify Mosquitto's Windows Firewall Inbound Rule to only allow connections from specific IP addresses?", "Tags": "|security|mqtt|mosquitto|microsoft-windows|whitelisting|", "Answer": "

You could make a custom rule by specifying the ports (1883 and 8883) and allowing UDP and TCP on these ports separately, with different rules.

\n\n
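For reference, such a port-scoped rule can also be created from an elevated command prompt with netsh. This is a sketch only: the rule name is made up, the remote IP should be adapted to the client you want to allow, and I have not tested it against your exact setup:

```shell
netsh advfirewall firewall add rule name="mosquitto-1883-whitelist" ^
    dir=in action=allow protocol=TCP localport=1883 remoteip=192.168.1.5
```

Repeat with `localport=8883` (and `protocol=UDP` if you really need it) for the remaining rules.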

See: https://technet.microsoft.com/en-us/library/cc947814(v=ws.10).aspx

\n\n

This post says you need the hack you described (which did not work in your case) to change the defaults by program name.

\n" }, { "Id": "1099", "CreationDate": "2017-02-22T09:41:21.273", "Body": "

I'm a .NET developer who recently started working on Windows Azure IoT, but my role is just creating web APIs that provide the client (a mobile app) with data sent by IoT devices.

\n\n

I want to send commands to devices, instead of just receiving data sent by devices.\nYou could say that I just want to work on a simple demo project as a starting point, so I will get to know how IoT works in real time and what the exact flow of communication between an IoT hub and devices is.

\n\n

My questions are :

\n\n
    \n
  1. Which is the best open-source/free-trial IoT hub/platform to use? (I already used the Windows Azure trial and it has expired, so it's not an option.)

  2. \n
  3. Which is the best programming language to use for this demo? I think the language will depend on which IoT hub we choose, if I'm not wrong. (I already know C#, JavaScript and SQL Server for databases, but I'm ready to learn a new language if required.)

  4. \n
  5. Which device is cheap and best for this demo?

  6. \n
  7. Most importantly, where can I find a step-by-step tutorial for development?

  8. \n
\n\n

Any help is appreciated ...

\n\n

Feel free to ask if you have any doubt related to my query.

\n", "Title": "How to start IoT development to send commands from IoT Hub to Device", "Tags": "|networking|hardware|communication|", "Answer": "

Arduino is an open-source platform for IoT test projects; you'd buy either an official Arduino device or a cheap derivative, which may have a lower price or better characteristics.

\n\n

The same Arduino IDE can be used for all variants; you'll use the IDE to install the software and test your programs. A program has a one-time setup part and a loop part, where you build your logic in a C++-like language.

\n\n

C++ basics on memory usage may be useful on the programming language side.

\n\n

With more electronics skills you may also develop your own circuits for the device; this is permitted by the licence.

\n\n

[1] https://www.arduino.cc/en/Guide/Introduction

\n\n

[2] https://www.arduino.cc/en/Guide/windows

\n\n

[3] https://en.wikipedia.org/wiki/List_of_Arduino_boards_and_compatible_systems

\n" }, { "Id": "1104", "CreationDate": "2017-02-22T17:22:04.787", "Body": "

This is a subject I have been thinking of for a while, especially because the \"IoT\" concept has been floating around a lot lately.

\n\n

I will start with what I mean when I say \"IoT\". I know that the term IoT could mean different things and that sometimes it's misused. It is a term that is not clearly defined and can lead to big discussions about what exactly it means; I myself don't know of a proper and widely accepted definition. So for me IoT is a concept, one that describes the ability to connect to an embedded device remotely through the Internet, either from another embedded device or from a cell phone. As simple as that.

\n\n

In this context, the purpose of the connection doesn't matter: if you can connect one device in your office with another one at home, or if you can connect to one device at home from your cell phone, all of this through the Internet, then we are talking about IoT devices (the embedded devices, not the phone).

\n\n

So, having agreed on what I mean by IoT I will now describe what I'm trying to achieve.

\n\n

What I'm trying to achieve is precisely what I describe in my definition of IoT.

\n\n
\n

I want to have one or several embedded devices at home connected to my Internet router, either by Ethernet or Wi-Fi, and be able to connect to them remotely from other embedded devices in a remote location (and by remote I mean not on the same network), and maybe also be able to connect to them with a monitoring app on my phone

\n
\n\n

For example, I may have a simple embedded device acting as an on/off switch hooked up to my garage door opener, and another embedded device acting as a big red button on my desk at work, so that I can push the red button on my desk and the garage door opens.

\n\n

Another example would be to have an embedded device with ADC capabilities that can monitor the temperature of my house and send me an alert when it reaches a threshold. The notification could be received either by a simple android app or by another embedded device with a little screen sitting on my desk at work.

\n\n

These examples may be silly but are just to illustrate the possible scenarios and use cases for what I'm trying to achieve. At the end, the idea is the same, connect one embedded device with another through the internet.

\n\n

Another thing to clarify is that the data exchange between these devices will be very lightweight, just a couple of bytes each time; it is not as if hundreds of kilobytes need to be exchanged between devices.

\n\n

Additionally, the kind of \"embedded devices\" I'm referring to are simple but capable devices based on 100 MHz or 200 MHz Cortex-M4 microcontrollers. That is important to clarify, because there won't be any Linux or complex libraries running on those devices. In the end, it is such a waste of resources, and completely unnecessary, to have a powerful processor running Linux just to turn a light bulb on and off. In any case, I'm not planning to use a BeagleBoard, Raspberry Pi, or any other board like that as my embedded devices. Just microcontrollers, because no more complexity than that is needed.

\n\n

I don't know much about IoT platforms and those kinds of complex solutions out there. When I started this journey of finding out a way of connecting one embedded device with another through the internet I stumbled upon a couple of sites with IoT services.

\n\n

I know that there are some IoT cloud services like:

\n\n\n\n

Just to name a few. The main issues with those are cost and complexity. You have to pay for those services, and you also have to learn how to use all the services they offer (in case you need them all), their APIs, and maybe a bunch of other stuff that doesn't seem necessary just to exchange some bytes between devices. I just want something simpler than that, something I can do myself.

\n\n

You may say that implementing my own \"cloud\", if that is something I have to do, is not simple and that sometimes it is better to use those kinds of services for the sake of simplicity, but there are two main reasons why I want to know how to implement my own IoT services.

\n\n

The main reason is that I want to do it myself. I don't want to rely on a 3rd party to connect my devices to each other and since I'll be developing the code and the hardware for my devices then it feels better to also create my own means to connect them as IoT devices.

\n\n

The second reason is to learn how to do it. By knowing all the necessary things I need to achieve this, I will have a better understanding about the IoT world.

\n\n

Also, I want to mention that I'm proficient in C and I use Linux as my everyday OS at work as well as in my home, so please avoid windows stuff because that is useless to me. I'm not afraid of anything I have to implement in C for my embedded devices or on Linux to implement whatever is needed to achieve my goal.

\n\n

So my question is: what is necessary to implement, and where, to be able to connect two or more embedded devices to each other for the purpose of exchanging data between them?

\n\n

This question, What can I use to create an IoT on our own server?, asks something similar but is closed and has no answers; it also assumes an already existing cloud infrastructure to be used, so it doesn't help me.

\n\n

This other post What IoT services are available for storing/sending/publishing generic data in the cloud? has a similar question but the OP is asking explicitly for IoT services and I'm trying to avoid those.

\n", "Title": "What do I need to create my own personal cloud for IoT devices?", "Tags": "|networking|microcontrollers|", "Answer": "

You've questioned both previous answers about the need for a controller/hub. Consider that to make things happen, you need rules to exist. If you want to push a big red button to open a garage door, some rule has to tie the sensor (button) to the desired action (opening the door). There are two ways to make that happen: you can put the rule directly in the button, or you can put the rule in a separate computer.

\n\n

Let's think more about the direct solution. If you teach the button about the garage door, then your button holds the rules internally. The button needs the ID of the garage door, so if you replace the garage door, the button doesn't work. If the button is on your desk, and your house uses a proprietary network, the button has to know both the address of your home's gateway and the address of the door. The button needs to know the specific protocol to signal your door to open - do all manufacturers make compatible buttons that know about all door signals? The button can't do anything else unless you reprogram it - do you have a flash programmer for the button's chip laying around, and is that programmer compatible with any other devices? If you want the garage door to open, and 5 minutes later close, your button needs all the complexities of maintaining a real-time clock. Your button won't know the state of the door, making it hard to know if you're closing the door or opening it. And how do you back up the rules so that if your button breaks, your replacement button can do the job? On the plus side, it sounds cheap: you don't need a separate computer.

\n\n

With a controller, things are different. All messages from all sensors are delivered to the controller. Each sensor is simple: send the signal to the controller. The controller can then apply whatever inputs are needed to very complex rules: it can check the sunshine sensor and not turn on the outdoor lights unless it's dark, or not run the sprinklers if the average rainfall for the month is above average and the current temperature is five degrees below average. The controller can keep track of state. This could be important if you want a \"close garage door\" button but not an \"open garage door\" button (when I'm away from home, I rarely want to open the door, but I definitely want to close it if it is accidentally left open.)

\n\n

The controller can provide the place for device drivers that know how to listen to buttons and other drivers that know how to speak to doors. The controller may be more upgradeable to new devices and device types than a tiny chip tucked inside a button.

\n\n

The controller can also have more complex logic for infrastructure tasks such as delivering messages by offering certain levels of service. For example, the MQTT protocol allows for three different levels: try to deliver the message once, deliver it repeatedly until it's been seen at least once, or deliver it once and only once.

\n\n
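The stateful close-only-button example above can make the controller idea concrete. A toy rule engine sketch (event and command names are invented for illustration):

```python
class GarageController:
    """Toy controller: tracks door state and turns sensor events
    into commands, so the button itself stays rule-free."""

    def __init__(self):
        self.door_open = False

    def handle(self, event):
        """Apply the rules to an incoming event; return a command or None."""
        if event == "desk_button" and not self.door_open:
            self.door_open = True
            return "open_door"
        if event == "close_button" and self.door_open:
            self.door_open = False
            return "close_door"
        return None  # no rule matched; no command is sent
```

Because the state and rules live here, replacing a button or adding a phone app means adding an event source, not reprogramming a chip.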

The controller offers an architecturally logical place to consolidate all messaging to and from a communication gateway, allowing the use of an external interface. This means your button and your phone can both send signals, and the rules can figure out that either one is allowed to open the garage door. The gateway can also provide the security. You don't have to put your button and your garage door on the internet; you can put them both on private isolated networks and use the gateway to carry the signals. The controller also provides a single point to back up all the rules for your system.

\n\n

The downsides to the controller are added latency and extra complexity. Surprisingly, a controller doesn't make the cost go up appreciably. You can implement a controller on a Raspberry Pi for less than the cost of one average remotely controllable light switch. Don't discount the idea on the basis of cost alone.

\n" }, { "Id": "1108", "CreationDate": "2017-02-23T08:48:25.077", "Body": "

I have been coding embedded systems for (*cough*) decades now. Mainly telecomms & satcomm, with some telemetry & SCADA. I can also produce Windows, Linux & browser based apps and have good database knowledge.

\n\n

Sound like prime IoT developer material? I also have some free time, so, to keep me out of the pub, I would like to start a project.

\n\n

I would prefer something which grows in phases. Maybe develop the server, then the clients, then some browser-based reporting, maybe some test tools. Perhaps a phase one with minimal functionality, then phase two adding more features, etc

\n\n

I would like something to occupy my evenings and weekends for months, maybe years. I am undecided as whether to develop something open source, or something where I stand a slim chance of turning a shilling.

\n\n

The one Teensy flaw in my grandiose scheme is that I haven\u2019t got the faintest glimmer of an inkling of the beginning of the kernel of a clue as to what to develop.

\n\n

Does anyone have a suggestion for me (preferably not involving hairbrushes)?

\n", "Title": "Seeking an idea for an IoT project to keep me occupied", "Tags": "|software|", "Answer": "

Another approach is to buy the components and get them running, and then decide what to do with them. You will surely come up with ideas while developing.

\n\n

You can start with one of the Adafruit kits.

\n\n

https://www.adafruit.com/

\n\n

\"Adafruit

\n\n

Once you can control the I/O, the path to remote control from the server is quite straightforward. You start by turning LEDs on and off, then you can add relays and power something bigger: a fan, lights, motors, a water pump, etc.

\n" }, { "Id": "1112", "CreationDate": "2017-02-23T09:59:34.987", "Body": "

I have a bunch of IoT switches connected to my Wi-Fi.

\n

I am aware of three possibilities to connect and control them.

\n
    \n
  1. Through the Wi-Fi directly (like Samsung SmartThings does)
  2. \n
  3. Connect them to a personal VLAN and use them (seems more secure).
  4. \n
  5. Connect all the devices to a Raspberry Pi (or something similar) acting as a master.
  6. \n
\n

Which one would be the safest (most secure for IoT), comparatively?

\n

Are there any better solutions and how difficult would each one be?

\n", "Title": "Connect IoT devices directly to Wi-Fi , through VLAN or through a Raspberry Pi?", "Tags": "|networking|security|wifi|", "Answer": "

John is on to a solution that should work. Another alternative is to run all your IoT devices on a WiFi guest account, and everything else on the main account/password. This is a simple way to separate your smart devices from your computer network. It's a less sophisticated method of security but a lot easier to implement.

\n" }, { "Id": "1115", "CreationDate": "2017-02-23T16:05:28.480", "Body": "

This question originates from a question asking about a specific detail about building wireless sensor networks. While answering the question, I wanted to share some general guidelines for the planning process of a wireless sensor network.

\n\n

So let's say we want to build a new wireless sensor network deployment. What is the best approach to avoid the common pitfalls and mistakes others have made before?

\n", "Title": "What should be considered when building a wireless sensor network?", "Tags": "|sensors|wireless|", "Answer": "

Please do not waste your time and repeat the mistake that hundreds of research groups (including ours) have made over the decades: do not just throw some unspecific sensors into the wild without knowing what you really want to get in the end!

\n

There is a nice paper from 2006 (!) that shares experiences from a real-world deployment.

\n
\n

Langendoen, Koen, Aline Baggio, and Otto Visser. "Murphy loves potatoes: Experiences from a pilot sensor network deployment in precision agriculture." 20th International Parallel and Distributed Processing Symposium (IPDPS), 2006.

\n
\n

Be prepared for those and many other problems that could arise and plan ahead and focus on your target!

\n

You should ask yourself the following question: Why do I want to build the deployment? Is it really the data itself that I want to collect, do I want to evaluate and develop network protocols or do I want to develop and test new hardware? The answer results in very distinct paths:

\n

I want to get the data!

\n

In that case, try to rely on proven practices as much as you can. Buy standard hardware, use industrial-grade housings, provide far more battery capacity than you think your hardware requires, and monitor it! Use existing, well-tested software and do not build everything from scratch! Also ask yourself: do I really need wireless connections?

\n
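To see why battery headroom matters, here is a back-of-the-envelope sizing calculation in Python (all figures are illustrative assumptions, not measurements from any real node):

```python
# Rough battery sizing for a sensor node, illustrating why you should
# provide far more capacity than the naive estimate suggests.
def battery_life_days(capacity_mah: float, avg_current_ma: float,
                      margin: float = 3.0) -> float:
    """Runtime in days, divided by a safety margin for self-discharge,
    cold weather, and optimistic current estimates."""
    return capacity_mah / avg_current_ma / margin / 24.0

# Two AA cells (~2500 mAh) at an assumed average draw of 1 mA:
print(round(battery_life_days(2500, 1.0), 1))  # ~34.7 days, not the naive ~104
```

The 3x margin is an assumption; real deployments often discover their average current was underestimated by even more.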

Of course, there are many applications where you really need hundreds of energy harvesting, wireless, self-organizing and tiny devices. But just using these techniques because they are cool is a waste of money and time.

\n

If you really want to get the data, nothing is more frustrating than to notice that just for the most interesting day, no data is available because water has accumulated in your devices (been there...).

\n

I want to improve protocols for wireless sensor networks!

\n

In that case, really focus on the core. I talk about network protocols here, but it equally holds for all other procedures and algorithms in the IoT context.

\n

For most protocols, it does not matter whether they transport real-world data or just some pseudo-random noise. So why not take the easy road, throw away your sensors, and just generate some random data? I recommend the following procedure:

\n
    \n
  1. Think about which problem you want to solve. What is your research question?
  2. \n
  3. Read! Many things have been done already. Many concepts have been shown to be good, many others not. Starting with network protocols from scratch is just a waste of time.
  4. \n
  5. Do some theoretical evaluations. Is it really possible to improve a given protocol, or is it already at a fundamental limit? Shannon cannot be fooled!
  6. \n
  7. Do simulations. I suggest the OMNeT++/INET framework, but there are many frameworks out there. But please do not start from scratch. Most components are already there for your convenience. Test if your ideas work in the controlled environment of a simulator.
  8. \n
  9. Work on the hardware implementation. Does your implementation work at least on your desk?
  10. \n
  11. Test it in an already existing testbed. One example is the FIT IoT-LAB. This allows you to test your implementation with real-world hardware without the burden of all the problems arising from self-made testbeds.
  12. \n
  13. Now you can finally plan your real-world deployment and tailor it to the specific problem that you want to address. By now you should have a good idea of how dense your network has to be, how many devices are meaningful, how they should be distributed, which kind of data has to be provided, and so on. Then go to "I want to get the data!", but this time your data is the performance measure that you want to test.
  14. \n
\n
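The "throw away your sensors and generate random data" idea above can be sketched in a few lines of Python; the field names and distributions are illustrative assumptions:

```python
import random

# A protocol under test usually only needs plausible payloads, so
# generate them instead of deploying real hardware.
def fake_reading(node_id: int, rng: random.Random) -> dict:
    return {
        "node": node_id,
        "temperature": round(rng.gauss(20.0, 5.0), 2),  # degrees C
        "battery": round(rng.uniform(2.7, 3.3), 2),     # volts
    }

rng = random.Random(42)  # fixed seed -> reproducible traffic traces
trace = [fake_reading(n, rng) for n in range(3)]
print(len(trace))  # 3
```

A fixed seed makes simulation runs repeatable, which is exactly what you want when comparing protocol variants.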

Yes, this is a long way to go, but there are students doing this during a six-month master's thesis, so it is feasible and definitely worth the effort! There is already so much existing research in this area that skipping a step does not pay off in the end.

\n

I want to build cool hardware!

\n

If you are mostly interested in building cool hardware, start with playing around with existing hardware. Then think about what this hardware lacks and what could be improved. Maybe you just want to create a nice and waterproof housing and see how it works in practice.

\n

You will need several iterations anyway, so start with something oversized (e.g. in terms of RAM or persistent memory) and then strip away unnecessary parts in future iterations. This is much more satisfying than discovering that the software you want to use is just 1 KB too large after production. Also, make it easy to debug and evaluate your hardware.

\n

Even if you do not need a serial or USB interface for the final application, it speeds up the development a lot. If you are actually building a housing, attach a humidity sensor and monitor it constantly instead of just waiting and checking manually. If you integrate an energy harvester, monitor the energy flows, even if a precise power measurement IC might be oversized for a final application.

\n

For the software part, rely on existing components! If you are building a testbed because you like building hardware and do not really know what to do with it, publish it! There are many people (see above) who dream of having access to a real-world deployment, so they will happily provide software.

\n" }, { "Id": "1117", "CreationDate": "2017-02-23T19:57:16.927", "Body": "

I received some good answers in the question What do I need to create my own personal cloud for IoT devices? and one of the things that I understood from there is that I need to \"expose\" my HUB or GATEWAY to the external internet. The proposed solution for that is port forwarding.

\n\n

I created this as a separate question because it would be difficult to follow up properly just with comments on all the answers, and someone could get lost. Also, this information may be useful for somebody with a similar question.

\n\n

I don't like the idea of having to go to my router configuration and set up port forwarding, because that means configuring a device that, in spite of being part of the IoT infrastructure, is not one of \"my\" devices. The solution has to disrupt the existing home network as little as possible. Also, I've had instances where I don't know the admin password of a particular router, and it has been really difficult to get it.

\n\n

I'm sure that there is a way around that, even if it means having a more powerful IoT hub, maybe running Linux; I just don't know what that could be. It is OK to have a somewhat more complex hub if that \"alternate\" way avoids the port-forwarding configuration.

\n\n

I say that I'm sure there is a way because applications like TeamViewer don't need port forwarding to be configured.

\n\n

So the question is, does anyone know a way of \"exposing\" an IoT embedded device to the external internet in order to access it from anywhere in the world that does not involve port forwarding?

\n", "Title": "How do I avoid port forwarding when exposing IoT devices to the external Internet?", "Tags": "|smart-home|networking|", "Answer": "

Take a look at ssh! noports. I wrote it to be able to get to my home office/IoT widgets without having to open up ports to the Internet.

\n" }, { "Id": "1119", "CreationDate": "2017-02-24T08:32:54.473", "Body": "

I see several questions asking about details of an IoT network, including this one about port forwarding for example. I think it would be useful to ask about what might be considered the typical baseline architecture for a general-purpose IoT system.

\n\n

We have several questions talking about networking at the sensor side, if mesh-networks are suitable, etc. For this question, I'm less interested in these - they can be generalised as short-range wireless connections. I'm also not particularly interested in the detail of the local network between nodes, except where the details directly influence the overall network topology.

\n\n

I'm not looking for an exhaustive description, just capturing of the current norm. What general network topology is in typical use today, and provides a good scalable model covering at least these features:

\n\n\n\n

I'm not looking for inventions here, or answers that go deep into the specific corner cases. I also want to exclude security, except if any aspect of the topology is essential for good security (which I'm assuming is so obvious it doesn't belong on the feature list above)

\n", "Title": "What is the typical network topology for an IoT network?", "Tags": "|networking|system-architecture|topologies|", "Answer": "

For simplicity, I'll describe this using a typical smart-home setup as a reference, but nothing here is really fixed by the application. The high-level topology is equally suitable for a farm monitoring application covering several kilometres with thousands of sensors, tracking parking spaces in a city, or lighting management in an office building.

\n\n

I'll treat the problem approximately in layers of device hierarchy, which might correspond to increasing complexity of an installation, or pulling in specific use case scenarios. Here is my generalised diagram covering the whole network.

\n\n

\"enter

\n\n

Node level\nThe individual node in my diagram is a WiFi connected lamp with a local physical override switch. The node often has both sensor and control functions, and a small amount of local compute/storage. Ideally, the node can act autonomously. The node can take control from local switches, directly over the LAN (if it has WiFi/Bluetooth), or from either the local hub or the cloud. A node will frequently maintain a persistent TCP connection with the hub or cloud.

\n\n

One location typically contains several nodes, with different functions, using various connectivity options. A smart-home might collect indoor/outdoor temperature, activity and video data. Remote sensors may use disparate connections to the internet. Nodes typically use microcontrollers, often at low clock frequencies.

\n\n

Hub Level\nIn a smart home, there might be several hubs (one for each device vendor), with aggregation or hierarchy between them. The hub can be combined with the router, or stand-alone. The hub doesn't even need to be active in the network (other than to forward packets). However, the hub might be responsible for relaying commands to a node - commands originating from either other locally connected nodes or from a remote server. The hub might implement store-and-forward, compression or filtering of data. Really, the hub is just a facilitator. Today, the hub is the first part of the network that has the ability to provide public DNS, which makes it able to publish network structure information to the full system. As described in this question, a hub is often necessary to bridge between wired/WiFi TCP/IP and a low-power radio protocol such as Zigbee or Bluetooth Low Energy. Hubs are usually built around microprocessors, and are less power-constrained than nodes.

\n\n

Roaming Terminals\nOtherwise known as your smartphone. These are often the primary point of user interaction. A simple node can present its entire user interface through a smartphone, once the node can establish either a direct or a mediated link with a specific device. Achieving this fundamentally requires a mechanism for establishing trust/ownership/pairing. A terminal can establish whether its own hub is on the local network, whether it needs to perform all communication via an external server, or whether it is able to look up the IP address which allows direct routing to its 'home' hub. The latter scenario usually requires that the router is configured for port forwarding.

\n\n

Cloud service\nIt is common for the cloud service to perform the majority of the work in the stack, although this is not always necessary (and not all implementations will require any cloud function). The most useful feature that an external (publicly addressed) server can provide is orchestration. Every node and intermediate element of the network is usually able to communicate over a direct channel to this server, and the server can easily pass messages from one device to the others. The server can aggregate data and present visualisations to the user. Based on the user's configuration, it can also forward information to other users (access and heating control can be granted to guests, for example; flood/fire/intrusion warnings could use other network options to generate alerts). The cloud is also well placed to take in other data sources: modifying heating profiles based on forecast and calendar information, feeding local sensor information into local forecasting models, generating alerts for utility providers, and so on.

\n" }, { "Id": "1130", "CreationDate": "2017-02-27T09:05:48.303", "Body": "

In my college project (Smart Home System) there's a functionality in which if someone knocks on the door an image has to be displayed on a monitor (in a browser). I am implementing the door knock sensor (Piezo) using an Arduino which somehow has to send commands to the Raspberry Pi to take a photo, which is to be sent to a different computer's browser. There are several other modules like this. Everything is connected to a same WiFi network.

\n\n

Now I can hopefully make it work somehow using PHP, MySQL and several Ajax requests running constantly, but that's probably not a very neat way to do it. I've heard of Node.js and web sockets, but I am not sure I have time to learn them (I can if it's absolutely necessary).

\n\n

Anyway, can anyone tell me which is the right way to implement this type of system? It would be really helpful.

\n", "Title": "Confused about which technology to use in Smart Home System", "Tags": "|smart-home|raspberry-pi|", "Answer": "

Push vs Poll

\n\n

Your proposed solution of sending frequent AJAX requests sounds a lot like polling - you're sending a request every so often to check if the state has changed. It would make far more sense to push changes to the server when the piezo sensor detects a change.

\n\n

It's the difference between this:

\n\n
\n

Server: Is there someone at the door?\n Sensor: No.
\n Server: Is there someone at the door?\n Sensor: No.
\n Server: Is there someone at the door?\n Sensor: No.
\n ... repeat ad infinitum ...

\n
\n\n

And this:

\n\n
\n

Sensor: There's someone at the door!

\n
\n\n

The first example is polling, and the second is pushing. You can tell which one will have lower power usage, less complex code and reduced network usage.

\n\n
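The two dialogues above can be contrasted in code. This is a toy illustration, not a real sensor driver; the class and method names are invented:

```python
# Toy contrast between polling and pushing. A real sensor would sit on
# the network; here the "sensor" is just an object with a callback.
class KnockSensor:
    def __init__(self):
        self.knocked = False
        self._subscribers = []

    def is_knocked(self) -> bool:
        # Poll style: the server must call this over and over.
        return self.knocked

    def subscribe(self, callback):
        # Push style: interested parties register once...
        self._subscribers.append(callback)

    def knock(self):
        # ...and are notified only when something actually happens.
        self.knocked = True
        for cb in self._subscribers:
            cb("There's someone at the door!")

events = []
sensor = KnockSensor()
sensor.subscribe(events.append)  # one registration replaces endless polling
sensor.knock()
print(events)  # ["There's someone at the door!"]
```

With push, network traffic and CPU time scale with the number of events, not with how often you ask.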

HTTP or Something Else?

\n\n

An AJAX request is sent over HTTP, so it's quite heavyweight and requires a TCP handshake for every request (unless you use Keep-Alive).

\n\n

It may be worth considering alternative protocols such as MQTT (there's some good explanation in the question 'When and why to use MQTT protocol?', which has a very similar problem to yours).

\n\n

A message broker speaking MQTT might be a little more powerful than you really need in your current situation, but one MQTT broker could easily be expanded if you chose to add more devices to your smart home network, whereas your current system of AJAX requests would quickly fall apart. Imagine four or five different devices polling each other; your network would quickly become overloaded and it would be a massive drain on power.

\n\n
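One reason MQTT scales well as you add devices is its hierarchical topics with wildcards. The sketch below re-implements the core matching rules locally for illustration only; it simplifies a few edge cases of the real specification (for example, '#' also matching the parent level):

```python
# Tiny re-implementation of MQTT-style topic matching, not a real broker.
def topic_matches(pattern: str, topic: str) -> bool:
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":                      # multi-level wildcard
            return True
        if i >= len(t_parts):
            return False
        if p != "+" and p != t_parts[i]:  # '+' matches exactly one level
            return False
    return len(p_parts) == len(t_parts)

print(topic_matches("home/+/door", "home/front/door"))     # True
print(topic_matches("home/#", "home/garage/motion"))       # True
print(topic_matches("home/+/door", "home/garage/motion"))  # False
```

A new device only needs to publish to its own topic; subscribers pick up its messages through the wildcards without any other device changing.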

Node and Web Sockets

\n\n

Using web sockets and Node would solve the issue by pushing instead of polling, so it would be a good idea in my opinion. However, I suspect polling would work if you really didn't want to learn Node.

\n\n

If you want an extensible solution that will work when you expand your smart home, definitely go with pushing - it'll save a lot of trouble and tears. If you just want a quick proof of concept, polling will probably work.

\n\n

My personal advice is that you should either learn web sockets or investigate using a message broker like MQTT. You could use a client library like Mosquitto-PHP (with a guide by HiveMQ) to simplify using MQTT in PHP, or just go with Node and web sockets. I suspect the learning resources for Node and web sockets will be better, but MQTT tends to be favoured for smart home/IoT environments.

\n" }, { "Id": "1134", "CreationDate": "2017-02-27T15:00:55.507", "Body": "

I'm quite new to the Internet of Things world and until now I've only heard of MQTT.

\n\n

I want to research some other options apart from MQTT. Until now I've not found much more than WebSphere MQ and ZigBee. I know there must be dozens of other protocols, but I don't know how to select a few suitable ones without spending dozens of hours researching. That's why I reached out to a community that must know a few good ones.

\n\n

What other protocols are there available?

\n", "Title": "MQTT alternative (which is also compatible with C++)", "Tags": "|mqtt|", "Answer": "

Probably the biggest M2M protocol other than MQTT is CoAP. While it doesn't have a native C++ implementation per se, there are C and C# implementations, which are not huge leaps if you already have a good understanding of C++.

\n\n

Another well known protocol is AMQP, for which the AMQP-CPP library is available, enabling C++ implementation.

\n\n

WebSocket libraries are also available for C++. There are tons of alternatives out there: just pick one and take the leap!

\n" }, { "Id": "1137", "CreationDate": "2017-02-27T16:11:42.553", "Body": "

I'm considering getting an outdoor camera like Ring. Cameras like Ring are listed in the recommended compatible devices in the SmartThings market. So, I'm curious what sort of automation events are available with cameras like this?

\n\n\n", "Title": "How are cameras utilized in SmartThings?", "Tags": "|samsung-smartthings|digital-cameras|mobile-applications|", "Answer": "

I'm not sure what you mean by presence, but you can get notifications when the doorbell button is pushed and/or when motion detection is triggered.

\n\n

I understood from the specifications that with SmartThings you can switch either of these notifications on or off, and also drive another SmartThings device, like a light switch, based on the notification.

\n\n

There's more information available in the SmartThings Documentation on the Ring Doorbell.

\n" }, { "Id": "1140", "CreationDate": "2017-02-28T10:46:18.503", "Body": "

I have been working on a project which involves creating a LoRaWAN network using:

\n\n\n\n

To send the data from the gateway to the server I am going to use MQTT.

\n\n

As for visualising the data I am going to create an application using AngularJS.

\n\n

The problem is that I am confused about connecting the node to the server, since I have found two methods (Over-the-Air Activation and Activation by Personalization). Does the choice affect how the gateway and the server should be programmed?

\n\n

Also, do I program the gateway to send data to the server with MQTT, or does all the programming happen in the node?

\n", "Title": "How can I interface a LoraWan network with MQTT?", "Tags": "|networking|mqtt|lora|", "Answer": "

The standard approach is to connect your gateway to a LoRaWAN network server, as described above.

\n

If you want to connect the gateway directly to your broker you may use the MQTT forwarding function of your gateway: MQTT Forward Instruction.

\n" }, { "Id": "1146", "CreationDate": "2017-02-28T19:47:39.437", "Body": "

I'm currently working on a project in which I need to measure how many people are in an area without any personal interaction, specifically in parks. I don't have much budget for each place, and all are outdoor locations. I want to monitor possible gatherings of people (fights or unusual crowds).

\n\n

Some alternatives include:

\n\n\n", "Title": "Measure people density in an outdoor area", "Tags": "|sensors|", "Answer": "

Is it possible to have some weight-measurement plates at the entrances? With a few plates in a line at every gate you can identify movement directions, and by using the average weight of a person, you can approximate the number of people on a plate at a time. It might even be possible to differentiate between an adult and two children.
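For a rough sense of the arithmetic involved (the 70 kg average is an assumption, not a measured figure):

```python
# Rough estimate behind the weight-plate idea: total measured load
# divided by an assumed average body mass.
def estimate_people(total_kg: float, avg_kg: float = 70.0) -> int:
    return max(0, round(total_kg / avg_kg))

print(estimate_people(215.0))  # 3 (roughly three people on the plate)
```

In practice you would calibrate the average weight for the actual population of visitors and smooth over consecutive readings.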

\n" }, { "Id": "1155", "CreationDate": "2017-03-01T21:57:17.113", "Body": "

I've been looking into Schneider smart panels recently for home monitoring. From what I'm reading, it appears that they can monitor your home's electric usage on all the various circuits in real time. They are connected by Ethernet (not wireless, unfortunately), so it appears that I will be able to see statistics from my home computer.

\n\n

One thing, however, remains unclear to me: Are you able to reset a circuit when it blows from your computer, or will I have to trek down to the utility room every time I accidentally plug two toasters into the same circuit?

\n", "Title": "Can I reset breakers from my computer with the Schneider Smart Panel?", "Tags": "|smart-home|", "Answer": "

No, you would not be able to reset a breaker. Their \"smarts\" are for monitoring the panels and breakers, not operating them.

\n\n

It would be a huge safety issue if homeowners could remotely reset breakers. Breakers pop for a reason - there's an unsafe condition nearby. But if you could reset the breaker from anywhere on the globe, you would have no idea whether the problem has been fixed, and energizing the circuit could start a fire. Or consider that someone may already be trying to fix the circuit that popped, and you could remotely energize it while they're working on it.

\n\n

Breakers on transmission lines are different, because they're repaired and operated by trained engineers who follow strict safety protocols. But home and commercial equipment has to be as safe as possible by default.

\n" }, { "Id": "1164", "CreationDate": "2017-03-03T17:06:44.980", "Body": "

If I'm in a private meeting at my home, I don't really want Alexa piping up with her idle chatter if I or someone else accidentally says the wake word.

\n\n

Equally, if I'm talking in my sleep (which I do a lot of), I don't want to accidentally wake up Alexa and disrupt my sleep.

\n\n

Is there a way to silence the Amazon Echo temporarily so it doesn't respond to accidental wake ups when I want it to remain quiet?

\n", "Title": "Keep Alexa from responding when I'm talking in my sleep", "Tags": "|alexa|amazon-echo|", "Answer": "

You can press the microphone on/off button on the top of your device to disable the microphone before you go to sleep. Here's what it looks like for the Echo Dot:

\n\n

\"Amazon

\n\n

Source: FASTILY of Wikimedia Commons, CC-BY-SA 4.0

\n\n

After pressing the microphone off button, your device will look more like this:

\n\n

\"Amazon

\n\n

Source: Hedwig Storch of Wikimedia Commons, CC-BY-SA 4.0

\n\n

Notice the illuminated microphone off button and the red ring. While the ring is red, your device will not be listening at all, until you press the microphone on/off button again. The Amazon Echo will look largely the same, except it's significantly taller. The icon on the microphone on/off button is the same as in the pictures above.

\n\n

Unfortunately, you can't turn off the Echo's microphone by voice\u2014the button must be physically pressed. Also, you can't turn the microphone back on by voice, because\u2014well\u2014the microphone is off, and can't listen to you, because you told it not to!

\n" }, { "Id": "1168", "CreationDate": "2017-03-04T13:59:59.843", "Body": "

Recently, Amazon S3 had an outage which caused lots of web services to go down, including IFTTT, which is often used to link IoT devices together (e.g. connecting your Alexa to some Philips Hue bulbs).

\n\n

Nest security cameras stopped working, TP-Link smart switches refused to turn on, and someone apparently couldn't change their mouse sensitivity during the outage because it syncs with the cloud.

\n\n

In a smart home with a few Philips Hue bulbs, an Amazon Echo and some smart switches, I'd like to try and avoid issues like that so my house doesn't 'go down' along with the cloud services.

\n\n

How can I figure out if my devices rely on one single service and avoid it if possible?

\n", "Title": "How can I avoid my IoT devices breaking when cloud services go down?", "Tags": "|smart-home|amazon-s3|", "Answer": "

As a consumer

\n\n

Your options are often quite limited as a consumer, but you can minimise your risks in a few ways through carefully selecting the products you use and how you connect them.

\n\n

Check what happens when your device loses Internet connectivity

\n\n

Usually, you can just do a quick Google search to see what happens when a certain device disconnects from the Internet. Some devices will simply fail completely if their connection to a remote cloud server is lost, like the Amazon Echo:

\n\n
\n

Your Echo requires an active Wi-Fi connection to speak, process your commands, and stream media.

\n
\n\n

Sometimes, there's a good reason (for example, the Echo has to stream commands to the cloud to process your instructions, as stated in 'Is the Amazon Echo 'always listening' and sending data to the cloud?'), but for others, it may just be an oversight or design flaw in your product.

\n\n

If you physically have the device, you could try unplugging your router to see what happens\u2014this might not be a great test, because it's more likely a remote server will break but local connections will still work, but it's something to try.

\n\n

With enough time to spend improving your setup, you could potentially sniff packets from your devices, then apply a router-level block to certain domains\u2014this way, you'd know what would happen if mydeviceserver.com went down completely. Of course, this would take a long time, so it might not be practical to test all of your devices in a large home with lots of 'smart' devices.

\n\n

Use local connectivity

\n\n

If you're just turning your lights on from your smart switch, you might not need to route all the traffic through the Internet, into a cloud server thousands of miles away, and back to your lightbulb\u2014you might just be able to route the command through local devices instead. A lot of the time, these devices will use a protocol like ZigBee or Z-Wave, so you might need a hub to co-ordinate the traffic (see 'Why do I need hubs for some devices when automating my home?').

\n\n

As a developer

\n\n

For developers of IoT devices, careful design of a device can avoid problems like the recent S3 outage from affecting consumers. Of course, IoT designers aren't always known for careful design, but if you're reading this, you're probably not in that group.

\n\n

Design services to be redundant

\n\n

For Amazon S3's recent outage in particular, there may not have been much you could do. There are some reports that cross-region replication could have potentially prevented services from going down, as explained in this question on DevOps Stack Exchange, but it's debated whether that's really true or just poor advice.

\n\n

If feasible, having some sort of redundancy or backup would be ideal\u2014although the costs are greater, the additional reliability is greatly needed\u2014otherwise, people's lights stop responding, power switches refuse to work, etc.

\n\n

Add better support for scenarios without Internet connections

\n\n

I listed 'Use local connectivity' under the ways that a consumer could avoid this issue, but it's a losing battle. The devices often don't support connecting in any other way than through their approved web service, and manufacturers are reluctant to spend developer time on this. If the support was greater, there would be less reliance on cloud services, which benefits the manufacturer too, because they don't need to pay for so much server capacity.

\n\n

With all these options, why were so many devices affected?

\n\n

Because no-one wants to spend the time\u2014designing any sort of reliable system takes a lot of time and effort, and it's often far more complex than the comparable 'dumb' solution (e.g. simple electrical switches).

\n\n

Why isn't software as reliable as a car? Because the software has so much more complexity, yet isn't tested nearly as rigorously as a car would be. The same issue seems to apply with IoT\u2014controlling devices through a network is far more complex, so things can go wrong far more easily, as we've seen with the recent S3 incident.

\n" }, { "Id": "1173", "CreationDate": "2017-03-05T14:23:50.600", "Body": "

In one of my previous answers, I mentioned that when designing a cloud service to connect to IoT devices, it would be best to make the servers redundant in some way so that if one data centre or server fails, the whole system still works.

\n\n

Sean Houlihane pointed out that having true redundancy would probably double the costs for a provider, making it technically infeasible.

\n\n

So, I'm interested to know how a cloud service (perhaps like the service used for the Nest thermostat, which went down in the S3 outage) could be made more reliable without duplicating every component, in a way that would be less disruptive to a company's business model.

\n\n

The sort of device I'm thinking of is something like a smart thermostat that needs to synchronise data from a phone app (outside the home's local network) to the thermostat itself, and stores the state in cloud storage like S3.

\n\n

What can I do to ensure that the cloud servers have a high availability without running two copies of every server in different locations?

\n", "Title": "How can IoT cloud services be made more reliable without excessive cost?", "Tags": "|cloud-computing|", "Answer": "
\n

...double the costs for a provider, making it technically infeasible.

\n
\n\n

Doubling costs isn't really \"technically infeasible\"; it just makes the solution more expensive. However, it's not really that bad.

\n\n

I work for a service that rents dedicated hardware at three different data centers from three different vendors, one in the west, one in the east, and one in the midwest. Our client application automatically, constantly, and seamlessly switches to whatever center responds quickest. (The server side doesn't even have to do \"load balancing\"; the client side does \"load distribution\" instead.) Since starting up years ago, there has never been a moment when at least 2 of the 3 centers weren't responding.

\n\n

The costs to rent and operate those servers are minuscule. We could easily expand to more data centers using petty cash, but it's not necessary, because the three existing systems provide plenty of redundancy and plenty of throughput.

\n\n

Computers are cheap. Outages are expensive and get you bad press. Redundant servers are the cheapest solution.
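As a sketch of that client-side \"load distribution\": a minimal Python model of picking whichever center responds quickest (the endpoint names and the probe stub are hypothetical; a real client would time actual lightweight requests):

```python
import random

# Hypothetical endpoints; in a real deployment these would be the
# addresses of the three independently hosted data centers.
ENDPOINTS = ["east.example.com", "west.example.com", "midwest.example.com"]

def probe(endpoint):
    """Measure round-trip time to an endpoint.

    Stubbed with a random delay here; a real client would time an
    actual request (e.g. an HTTP HEAD or a small ping message).
    """
    return random.uniform(0.01, 0.2)

def pick_fastest(endpoints, probe_fn=probe):
    """Return whichever endpoint currently responds quickest.

    Endpoints that fail to respond are skipped, so the client keeps
    working as long as at least one data center is up.
    """
    best, best_rtt = None, float("inf")
    for ep in endpoints:
        try:
            rtt = probe_fn(ep)
        except OSError:
            continue  # treat an unreachable center as a failed probe
        if rtt < best_rtt:
            best, best_rtt = ep, rtt
    if best is None:
        raise RuntimeError("no data center responded")
    return best
```

Run at connection time (or periodically), this gives the failover behaviour described above without any server-side coordination.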

\n" }, { "Id": "1181", "CreationDate": "2017-03-07T06:34:10.393", "Body": "

Spiritually similar to the question here Embedded modem options

\n\n

Why is there such an enormous barrier to getting into something like a Snapdragon 410 with onboard LTE? After extensive research, I've found that LTE-equipped SoCs are everywhere, yet unavailable in terms of development kits, research material and so on.

\n\n

It's always \"contact sales\" or just a marketing brochure webpage, along with literally a million phones that the SoC is tucked away in. If I actually want to do development with an LTE SoC (and I do), how do I even get started?

\n\n

I love my STM chips and the SIMCOMs are fun but it feels like diddling around in 2008 using that stuff.

\n", "Title": "Why don't we have more 3G+ modem options available?", "Tags": "|microcontrollers|mobile-data|", "Answer": "

There are many Android modules based on LTE SoCs like the Qualcomm MSM8909 and the MediaTek MT6735 and MT6755.

\n\n

NewMobi has a range of modules.

\n" }, { "Id": "1183", "CreationDate": "2017-03-07T17:24:50.840", "Body": "

I am working through the tutorial for the Android Things SDK and I have the following set up:

\n\n

\"pictures

\n\n

I hooked up the hardware via the Android Things SDK and have a working handler for the button/switch working but it only works once. After that I need to restart the board to get the handler to fire again.

\n\n

I'm new to this stuff so I'm not really sure how to diagnose it. The SDK isn't reporting any errors, and restarting the Android app re-prints all the available GPIO and the messages I have, so I know the board isn't completely \"frozen\", but that input will not respond again until I restart the entire board.

\n\n

Any help or explanation appreciated.

\n\n

Setup:\nIntel Edison. Red = power, black = ground. 10k resistor is connected to power (sorry for the photo).

\n", "Title": "Android Things: GPIO button/switch handler only responds once", "Tags": "|android-things|intel-edison|", "Answer": "

Sorry for the late reply. It turns out the problem was configuration related. I was using a 10k ohm resistor and I should have been using a 330 ohm one. The issue, in my opinion, stemmed from some decently confusing images in the Android Things \"Getting Started\" guide and the fact that the Raspberry Pi is more popular (so the instructions were geared toward that specific configuration).

\n\n

Thanks to everyone who replied. I figure this is at least useful in case someone who's super new to this (like me) makes the same mistake.

\n" }, { "Id": "1184", "CreationDate": "2017-03-07T18:54:08.183", "Body": "

I am trying to understand the vulnerability of my home IoT devices. From what I understand, the big DDoS last year was caused by compromised IoT devices with generic or default usernames and passwords.

\n\n

All my IoT devices are behind my firewall (like my thermostats - Honeywell).

\n\n

They connect to the internet, but outgoing only. I have no port forwarding setup.

\n\n

If all my IoT devices are behind my router's firewall, and I do not have any port forwarding to those devices, where is the risk with having default usernames and passwords?

\n", "Title": "How do I protect my home from IoT devices being compromised and being used for DDoS attacks?", "Tags": "|smart-home|security|routers|", "Answer": "
    \n
  1. In addition to the very nice discussion above, you can be your own security expert, starting with nmap, the flagship tool from Insecure.Org. You can perform a basic scan of a target device (192.168.1.1) with a simple command:

    \n\n

    nmap -A -T4 192.168.1.1

    \n\n

    More details, examples and hints on how to scan your network can be found on the Nmap Cheat Sheet pages.

  2. \n
  3. Then, scan every single IoT device in your network and recheck every device with suspicious ports open.

  4. \n
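As a complement to nmap, the basic TCP connect check it performs for each port can be sketched in a few lines of Python (the target address and port list below are just examples):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.

    This is the same basic check a TCP connect scan performs for each
    port, just without nmap's service-fingerprinting extras.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example: list suspicious open ports on a device at 192.168.1.1
# (Telnet, port 23, being open is a classic red flag on IoT gear).
# open_ports = [p for p in (22, 23, 80, 443, 8080)
#               if port_open("192.168.1.1", p)]
```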
\n" }, { "Id": "1190", "CreationDate": "2017-03-08T12:56:47.153", "Body": "

I just started looking into MQTT protocol.

\n\n

Situation

\n\n

In my college project, I currently use an Arduino as the main MCU and do all the work there, using serial comm. to send AT commands to an esp8266 (for HTTP requests, running scripts on a server, etc.). I basically needed to push some data (from the Arduino) to a webpage (hosted by a local server). I searched and found the MQTT protocol, which enables clients to publish and subscribe to data (exactly what I wanted). But most of the tutorials I am finding are either entirely on the Arduino (with a WiFi shield) or entirely on the esp8266.

\n\n

What I want to know is: is there a way to use MQTT with my current configuration? That is, using the Arduino to do all the work, and publishing data over MQTT just by sending AT commands over its serial connection.

\n\n

Additional Information about my project is mentioned here : Confused about which technology to use in Smart Home System

\n", "Title": "How to use MQTT on Arduino which uses serial com to send AT commands to ESP8266", "Tags": "|mqtt|esp8266|", "Answer": "

I finally found a tutorial by Sony Arouje. As it turns out, I had to completely abandon manually sending ESP commands and use the library (WiFiEsp.h) instead. Hint... it's better!

\n\n

Though initially it didn't work with my esp8266 because it had older firmware and reported the error \"firmware not supported\". I had to flash a newer firmware (version 1.54 works in my case). Anyone having problems flashing the firmware may find some help in this topic: Can't Flash ESP8266 latest firmware, says &quot;Fast Flashing error&quot; and &quot;Invalid head of packet(' ')&quot;

\n\n

Also, I have saved a copy of all the download tools and the tutorial page itself, so if, in the future, the post is removed or anyone can't find the tools to flash, get to me in the comments or something (not posting here because I don't know if it's OK to post someone else's content).

\n" }, { "Id": "1194", "CreationDate": "2017-03-09T08:41:31.397", "Body": "

I am working on the Azure IoT platform, and I understand how devices send data to the IoT hub (if I am not wrong, it is just a web service call or something similar).

\n\n

But I wonder how the IoT hub sends data/commands/input to the devices, because we are not working on the IoT hub side for device communication (we don't have any requirement to push data to devices). Can the IoT hub directly interact with the devices (using the unique ID of a device, or any unique identity like IP, MAC address, etc.)?

\n\n

Somewhere I've read that devices keep asking the IoT hub whether it has any input for them, and the IoT hub then sends data/commands/input to the devices in response. Is that true? If not, then please explain.

\n", "Title": "How does an Azure IoT Hub interact with Embedded/IoT devices?", "Tags": "|hardware|communication|aws-iot|microsoft-windows-iot|azure|", "Answer": "

The model that IoT Hub connected devices use is that they will never accept incoming connections. IoT Hub devices never act as a 'server', and this is a crucial part of the security model in Azure IoT. The definitive model on this is encapsulated in Clemens Vasters' 'Service Assisted Communication'.

\n\n

Therefore devices are always 'polling' an external service in order to send data or receive commands. The APIs make it look like data is being sent to a device, but it is always the device making the outgoing connection.

\n\n

IoT hub does this in two ways:

\n\n
    \n
  1. By sending data to the device endpoint /devices/{deviceId}/messages/devicebound. This is an AMQP messaging endpoint, similar to a queue or topic subscription. The device, when reading commands, needs to acknowledge receipt if needed, which is part of the underlying AMQP protocol. This works the same with MQTT, and HTTPS is a valid fallback.\nThe API wraps all of this up for you. There are additional concepts, such as 'direct methods', which are an API wrapper around essentially the same underlying message protocol.
  2. \n
  3. By using the server-side device twin, which is a way to logically keep properties in sync between device and server. You set a property on the device twin, and when the device syncs up that property will be synced to the device. This is less message-based and built on top of the LWM2M device management protocol.
  4. \n
\n\n

A lot of the 'polling', connecting, connection sharing, receipts, etc. should be taken care of as part of the AMQP (or MQTT) protocol, which in turn is wrapped up in the IoT Hub SDK. So the above is highly simplified, but to reiterate: IoT Hub cannot, and will not (ever), try to send data to an IP address/port on your device.
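The device-always-dials-out pattern can be sketched as a toy model (this is not the Azure SDK; names like FakeHub are invented for illustration, and a real device would use the IoT Hub device client over AMQP or MQTT):

```python
import queue

class FakeHub:
    """Stand-in for the cloud side: holds per-device command queues."""
    def __init__(self):
        self._queues = {}

    def send_c2d(self, device_id, command):
        # What looks like "sending to the device" is really just
        # enqueueing on the device's devicebound endpoint.
        self._queues.setdefault(device_id, queue.Queue()).put(command)

    def poll(self, device_id):
        # The device initiates this call; the hub never connects inward.
        q = self._queues.setdefault(device_id, queue.Queue())
        try:
            return q.get_nowait()
        except queue.Empty:
            return None

class Device:
    def __init__(self, device_id, hub):
        self.device_id = device_id
        self.hub = hub
        self.received = []

    def check_for_commands(self):
        # Outgoing connection only: the device asks, the hub answers.
        cmd = self.hub.poll(self.device_id)
        if cmd is not None:
            self.received.append(cmd)  # process/acknowledge the command
        return cmd
```

The key property to notice is that `Device` never exposes anything for the hub to connect to; the hub only ever answers requests the device makes.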

\n" }, { "Id": "1196", "CreationDate": "2017-03-09T14:07:22.870", "Body": "

I am the system administrator for some friends who have a couple of web-cams for professional surveillance at a small commercial plant. The cameras are D-Links from the DCS series.

\n

Currently, they are accessing them through the Mydlink portal. This works fine (tm) most of the time, unless you try to access them from a Ubuntu system. Then mydlink.com gives this error:

\n
\n

Unsupported Browser or Operating System Detected!

\n

You are using an unsupported browser or operating system and therefore the mydlink web portal may not look, behave or function as intended.

\n
\n

Looking at the list of available options, it appears that the only operating systems that are supported are Mac and Windows. Is there a way to access the cameras from Ubuntu short of setting up a custom web page with dynamic DDNS?

\n", "Title": "Access D-link webcams on Ubuntu", "Tags": "|digital-cameras|", "Answer": "

N.B.: Including this answer here in hopes that it helps someone else.

\n

Yes, it's actually quite simple.

\n

The issue is not so much with the operating system as it is that mydlink.com does not officially support Ubuntu. Therefore, all that is necessary is to fake your user agent. There are several options for doing this:

\n

1. Directly from the Chrome console

\n

Simply follow these steps:

\n\n

2. Install an extension

\n

There are several extensions available for Chrome that will do this for you. Some notable ones are User-Agent Switcher for Chrome, User-Agent Switcher for Google Chrome, and User-Agent Switcher.
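The same user-agent trick also works from a script, which is handy if you want to pull camera pages programmatically; here is a sketch using Python's urllib (the user-agent string is just an example Chrome-on-Windows value):

```python
import urllib.request

# Any Windows/Chrome user-agent string will do; this one is an example.
SPOOFED_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/56.0.2924.87 Safari/537.36")

def make_request(url):
    """Build a request that claims to come from Chrome on Windows,
    so a site's OS/browser sniffing sees a 'supported' platform."""
    return urllib.request.Request(url, headers={"User-Agent": SPOOFED_UA})

# req = make_request("https://www.mydlink.com/")
# with urllib.request.urlopen(req) as resp:
#     html = resp.read()
```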

\n" }, { "Id": "1198", "CreationDate": "2017-03-10T22:02:47.510", "Body": "

I know that some skills can capture spoken text, such as when adding to to-do lists and shopping lists, and third-party skills can also do this, e.g. SMS with Molly.

\n\n

So, how do they do this? Is there an API call that captures the recognized text and stores it somewhere?

\n", "Title": "How does one capture the recognised text from Echo?", "Tags": "|alexa|", "Answer": "

Custom skills can capture text and send it to your skill's API.

\n\n

If you're not completely familiar with how Alexa Skills work, here's a brief summary:

\n\n\n\n

The AMAZON.LITERAL slot allows you to accept virtually any input. Note that currently it is only supported in the English (US) region\u2014English (UK) and German skills cannot use AMAZON.LITERAL.

\n\n

Your intent schema might look like this:

\n\n
{\n  \"intents\": [\n    {\n      \"intent\": \"SaveTodo\",\n      \"slots\": [\n        {\n          \"name\": \"Todo\",\n          \"type\": \"AMAZON.LITERAL\"\n        }\n      ]\n    }\n  ]\n}\n
\n\n

And your sample utterances might be like this:

\n\n
SaveTodo remind me to {fetch the shopping|Todo}\nSaveTodo remind me to {write my English essay|Todo}\nSaveTodo remind me to {buy some dog food tomorrow|Todo}\n
\n\n

When using AMAZON.LITERAL, you need to provide lots of sample utterances\u2014at least one sample for each possible length of input, but ideally more. The Amazon documentation suggests that you should be aiming for hundreds of samples for slots where you could accept various types of inputs.

\n\n

It does seem a little tedious, but if you don't do this, it's unlikely that your skill will recognise text well. You could perhaps generate sample utterances from customer data (so long as personal information is removed beforehand!) so that the most common utterances are in your samples\u2014I suspect Alexa will be slightly biased towards recognising utterances similar to the samples.

\n\n

Amazon discourages AMAZON.LITERAL slots, though, and would prefer you to use custom slot types, which require you to list the possible inputs. It's important to remember that:

\n\n
\n

A custom slot type is not the equivalent of an enumeration. Values outside the list may still be returned if recognized by the spoken language understanding system. Although input to a custom slot type is weighted towards the values in the list, it is not constrained to just the items on the list. Your code still needs to include validation and error checking when using slot values.

\n
\n" }, { "Id": "1199", "CreationDate": "2017-03-11T09:01:57.617", "Body": "

I can't figure out how to flash the firmware found on http://www.electrodragon.com/w/ESP8266_AT-Command_firmware. I am using ESP8266 Download Tool v3.4.4. When I add a firmware file to the download path, flashing runs up to a fixed percentage (usually 77 or 99) and then gives the error Invalid head of packet, FAST FLASHING ERROR. One more thing: the tutorials I am following have many download path entries filled in the download tool, while mine are just blank. It says to upload the combined file to 0x0000. I tried uploading the V1.54... file's contents (both files) but still got a similar error.

\n\n

\"enter

\n\n

So anyone could just guide me through this as I am a complete newb into this. Please mention any other information needed in comments.

\n\n

PS

\n\n
    \n
  1. I am pretty sure the power supply is adequate, as I have been using the esp8266 with the old firmware smoothly. I have also connected a 200 uF capacitor between GND and VCC (3.3 V from the FTDI).
  2. \n
  3. I have double checked the connections.(Yes, GPIO0 is grounded).
  4. \n
  5. I flashed an earlier version successfully ai-thinker-v1.1.1.bin but when I connected it to arduino IDE serial monitor it started giving unending gibberish text :P.
  6. \n
\n", "Title": "Can't Flash ESP8266 latest firmware, says \"Fast Flashing error\" and \"Invalid head of packet(' ')\"", "Tags": "|esp8266|", "Answer": "

\"Configuration

\n\n

Okay found the solution.

\n\n
    \n
  1. Firstly, I think the detected info block tells the flash size. In the snapshot it can be seen as 8 Mbit.
  2. \n
  3. The 1.54 version of the firmware has two files, one for 8 Mbit and the other for 32 Mbit.
  4. \n
  5. I went for 8 Mbit, checked both SpiAutoSet and DoNotChgBin, and voilà, it was successful this time.
  6. \n
  7. I set the baud rate to the maximum (1500000), though I'm not sure if that's necessary.
  8. \n
\n\n

PS: Please help improve this answer by mentioning any important information I've missed, or if anything is wrong.

\n" }, { "Id": "1203", "CreationDate": "2017-03-12T08:35:15.470", "Body": "

Please advise of

\n\n
    \n
  1. any smartwatches whose timers have an autorestart feature, namely where dismiss and reset happen in one swipe and swipes are required
  2. \n
\n\n

or

\n\n
    \n
  1. how certain smartwatch alarms can achieve such.
  2. \n
\n\n

Context:

\n\n

I had previously asked about Pomodoro or HIIT timer variants that, unlike a plain timer, have an autoreset feature but still ask to be dismissed. Do Android smartwatches have something like what I Can't Wake Up has?

\n\n

Example:

\n\n

Let's say it is 11:59am. I want to be reminded of the time at 12:05pm and then be reminded every 5 minutes.

\n\n

Currently, I am using my phone.

\n\n

So either I set a timer for 5 minutes when it's 12pm, or I set an alarm to ring at 12:05pm. If a timer: I want an option to autorestart after dismissal (so far I haven't found any). If an alarm: I want an option that makes snoozing not so easy (with I Can't Wake Up, I can make it so that I have to slide up to snooze or slide down to dismiss).

\n\n

But it's a hassle to get my phone out every time, and sometimes I have to unlock the phone first before snoozing the alarm (I Can't Wake Up has 'Quit Block', but I have yet to try this).

\n\n

I'm hoping some smartwatch has the ability to go off at 12:05pm or 5 minutes after I set the timer (which would be around 12pm) so that I just have to swipe my watch instead of taking out my phone.

\n\n

I haven't been able to find such features for any smartwatch, but I may be using wrong keywords.

\n", "Title": "Which smartwatches have autorestart timers?", "Tags": "|smart-watches|", "Answer": "

Set your smartwatch alarm to snooze every 5 minutes.

\n" }, { "Id": "1205", "CreationDate": "2017-03-12T10:06:32.267", "Body": "

I have a Mosquitto broker up and running on my Windows machine. I don't remember if I installed it with WebSockets support (because I didn't know what that was, or whether I needed it). But seeing that my requirement now is to use JavaScript (Paho) to connect to MQTT, I want to know how to enable WebSocket support for my existing MQTT broker.

\n\n

I tried editing mosquitto.conf file by adding these lines to the file

\n\n
listener 9001\nprotocol websockets\n
\n\n

but it doesn't seem to work. I am attaching an image that might provide a better picture:

\n\n

\"Windows

\n\n

I am not very sure of the commands either, but they seem to work with the default port 1883.

\n\n

So the question is: How do I make it work?

\n", "Title": "How to enable WebSockets on Mosquitto running on Windows?", "Tags": "|mqtt|mosquitto|microsoft-windows|web-sockets|", "Answer": "

Starting from 1.5.1, the Windows package supports websockets; see the changelog at https://mosquitto.org/blog/.\nYou just have to edit the mosquitto.conf file, specify the websocket protocol by adding \"protocol websockets\" (see the definition around line 145), and then restart Mosquitto if you run it as a service.

\n" }, { "Id": "1209", "CreationDate": "2017-03-12T14:21:03.343", "Body": "

This came to my attention recently when I found an amazing video on Youtube by:

\n\n
\n

Michael E. Anderson: Comparing Messaging Techniques for the IoT, OpenIoTSummit, Linux Foundation.

\n
\n\n

The slides for his talk are Available Here

\n\n

On slide 26, at 41 minutes into the video, he discusses how (paraphrasing):

\n\n
\n

Cellular carriers prefer that their IoT consumers use HTML, XML or JSON types of messages, since they consume more data. More data means they can charge the consumers more money for the service.

\n
\n\n

I understand that a lot of proprietary protocols viz. SigFox, Wireless HART or Z Wave have lower data rates and sending bulky data over such carriers can be an expensive affair.

\n\n

Question

\n\n\n", "Title": "What Messaging Type can be used for Cellular Network Oriented IoT Protocols?", "Tags": "|communication|protocols|", "Answer": "

Are you asking about the protocol or the message format? We often incorrectly use the term protocol when we mean the format of the data. I do this myself, often because the distinction isn't clear to everyone.

\n\n

Messaging protocols used in IoT tend to be fairly compact, at least more so than HTTP, and offer significant features that are important in messaging (sessions, flow control, reliability, etc.). The message format is the layout of the data in the messages that get sent. I assume that this is what you are asking about.

\n\n

The most compact message format is a carefully considered hand-rolled binary format. It is frequently used in low-bandwidth scenarios where you want to send a few bytes and know exactly what those bytes look like. For larger messages the disadvantages are significant and, in general, it should be avoided at all costs.

\n\n

I went through a detailed assessment of many different data serialisation options. I expected protobuf and MessagePack to be fairly compact, which they were. However, my second problem was finding libraries that were maintained and available on a number of different platforms, including C on the device.

\n\n

The format that I settled on, surprisingly, was gzip compressed JSON. It is easy to implement and understand, runs everywhere, and, with the data that I was using, was about the same, or smaller, than other methods.

\n\n
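The gzip-compressed-JSON approach can be sketched in a few lines (the telemetry payload below is hypothetical, chosen to show how well repetitive sensor data compresses):

```python
import gzip
import json

def pack(payload):
    """Serialise a dict to gzip-compressed JSON bytes."""
    return gzip.compress(json.dumps(payload).encode("utf-8"))

def unpack(blob):
    """Inverse: decompress and parse back into a dict."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))

# Repetitive telemetry (hypothetical sensor readings) compresses well:
readings = {"device": "sensor-42",
            "samples": [{"t": i, "temp_c": 21.5} for i in range(100)]}
raw = json.dumps(readings).encode("utf-8")
packed = pack(readings)
# packed is much smaller than raw for this kind of repetitive data
```

Running exactly this kind of comparison on your own typical payloads is a quick way to check whether compressed JSON is competitive with binary formats for your case.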

Also beware that if you have a secure channel such as TLS, you're going to consume a chunk of data (>6KB) in TLS handshakes anyway.

\n\n

A few years ago, I expected a format like protocol buffers to dominate, but not much really happened. Probably because of the ease at which json can be written out and parsed (and compressed). I like the look of Flatbuffers, but the advantage is more on parsing speed than being compact.

\n\n

Since you are at the investigation stage, I suggest you write a bit of code for each option, using data that is typical of your situation, and do some comparisons. Having hard data when you start helps confirm your choices.

\n" }, { "Id": "1222", "CreationDate": "2017-03-14T19:05:09.080", "Body": "

As far as I know there are 2 general methods for enabling remote (Internet, not LAN) access to IoT devices:

\n\n
    \n
  1. Via a server that the device polls periodically (e.g. MQTT)
  2. \n
  3. Direct remote access
  4. \n
\n\n

I'm assuming the second method is not straightforward, as consumer devices typically sit behind a home router.

\n\n

My question is this: Roughly what percentage of currently sold IoT devices use which of the following methods to connect to them remotely:

\n\n
    \n
  1. Via a server (device polls the server)
  2. \n
  3. Direct remote access that requires manually configuring a home router to enable port forwarding (or other way that exposes the device)
  4. \n
  5. Direct remote access where the device automatically configures the router via UPnP or other protocol
  6. \n
  7. Direct remote access using a device's static IPv6 address that does not require router setup
  8. \n
  9. Other methods
  10. \n
\n\n

My question is related to consumer IoT devices, such as light bulbs, light switches, locks, thermometers, etc. from trusted manufacturers that are sold today and are installed in homes.

\n\n

Update:

\n\n

Found this answer by @Aurora0001 to another question on this site about hole punching, which enables direct communication between 2 devices residing in different internal networks (e.g. behind home routers). This solution requires a server, but only for the initial handshake.

\n\n

I guess that would add another option...

\n", "Title": "How do consumer IoT devices typically enable Internet connection?", "Tags": "|smart-home|security|networking|", "Answer": "

I think you'll find a fairly high percentage of \"#5, Other\", because the list is missing one of the most common consumer IoT architectures: indirect communications via an in-home gateway.

\n\n

All the other methods you describe have drawbacks in the home: they're hard to configure, they're not secure, or they take a lot of expensive server resources. An in-home gateway avoids those problems for the individual devices, exposing only one device to the internet.

\n\n

The typical gateway serves several purposes. First, it's a protocol bridge. Wireless devices use all kinds of open and proprietary communications protocols, including Z-Wave, Zigbee, dedicated 900 MHz RF, dedicated 433 MHz RF, infrared light, Bluetooth, BLE, ANT+, Crestron, etc. These solve all kinds of niche problems, like per-device cost, battery life, self-configuring mesh networks, rapid response times, insecure communications, simple configurations using minimal storage, etc. This way most consumer IoT devices aren't using IP packets, but instead deliver their data inside much smaller frames in order to preserve battery life. The gateway will convert the proprietary protocol into something more transportable and interoperable with an IP based network.

\n\n

Also, the in-home gateway is a good place to store the rules of the system. If you're going to enable rules like \"if you turn on the light at the top of the stairs, also turn on the entryway light, unless the kitchen light is on,\" you can place the rules in the light switches, a centralized web server, or the gateway. Putting the rules in each light switch makes for a brittle configuration that's hard to set up, change, or manage. Running the rules in a centralized server introduces latency because the message has to be translated to TCP, encrypted, sent across the internet, the action has to be received, decrypted, and translated back to Zigbee. The gateway enables the vendor to solve these problems by providing a single management point to back up and restore, and local processor to run the rules quickly.

\n\n
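The stairs/entryway rule described above can be sketched as the kind of tiny local rules engine a gateway might run (the device names are hypothetical):

```python
# State of the switches the gateway knows about (hypothetical names).
lights = {"stairs_top": False, "entryway": False, "kitchen": False}

def on_switch(name, state):
    """Apply a state change plus any gateway rules that depend on it."""
    lights[name] = state
    # Rule: turning on the stairs light also turns on the entryway
    # light, unless the kitchen light is already on.
    if name == "stairs_top" and state and not lights["kitchen"]:
        lights["entryway"] = True
```

Because the rule runs on the gateway, the entryway light reacts in milliseconds rather than waiting for a round trip to a cloud server.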

Security is a big issue: IoT devices need to be cheap, and cheap processors don't have big CPUs and storage for secure encryption functions. Not to mention the desire to avoid the massive expense of developing securely encrypted protocols. So they implement very weak (cheap) security in the consumer devices, or no security at all. They make up for this by only communicating within a very limited range - they only have to reach the in-home gateway. This way, the gateway handles the local unsecured communications, and only one device needs the processing power and storage needed to communicate to the cloud over TLS.

\n\n

Finally, the gateway can provide a convenient single point of human interface to the devices. Most gateways expose a web interface, allowing for GUI-based configuration. Imagine trying to Morse-code-configure a 12 character WiFi password into a device using only one button and one LED. Worse, imagine your company's phone support staff talking each customer through that process.

\n\n

Unfortunately, this still does not answer your question directly. But I expect the gateway architecture to be the most common way consumer-oriented devices connect to the internet.

\n\n

EDIT: In response to your comment about in-home gateways used for IoT devices, there are a few basic kinds: dedicated single purpose, dedicated multipurpose, and general purpose. In addition to the interfaces below, all of them have an Ethernet or WiFi interface to bridge messages to and from an IP network.

\n\n

A dedicated single purpose gateway speaks only to a particular manufacturer's devices. The simplest examples might be a USB dongle that receives data from a single device, like a Fitbit dongle. Other examples include the Philips Hue Bridge (which communicates only with Philips Hue light bulbs); the Liftmaster MyQ Gateway (which communicates only with Liftmaster, Chamberlain, or Craftsman garage door openers); or the Harmony Hub (which communicates with Logitech Harmony remotes and blinks IR to various home theater components.)

\n\n

An example of the dedicated multipurpose hub would be Samsung's SmartThings hub. SmartThings sells a wide variety of home automation devices, but they only speak the SmartThings protocol. The SmartThings hub can also communicate to many other device controllers via IP, and has native IFTTT integration.

\n\n

The general purpose gateways may have some proprietary components, but often support multiple interfaces and can serve as a primary smart home interface. Examples include the Wink Hub (which communicates to Zigbee, Z-Wave, Lutron, and Kidde RF devices); Vera Edge (which communicates to Z-Wave and Insteon devices, and extends to communicate to external devices).

\n\n

Finally there are also some very active open source efforts in the general purpose home automation domain, including Domoticz and OpenHAB. These are software programs that support communication to IoT devices through dedicated bridge devices (such as a Z-Wave USB dongle or a Zigbee radio), implement rules, and offer extensive integration capabilities such as IFTTT, MQTT, and others.

\n" }, { "Id": "1233", "CreationDate": "2017-03-17T16:26:12.297", "Body": "

On CNet, there's a report about Samsung UNF 8000 smart TVs being vulnerable to a hack developed by the CIA:

\n\n
\n

In June 2014, the CIA and UK's MI5 held a joint workshop to improve the \"Weeping Angel\" hack, which appears to have specifically targeted Samsung's F8000 series TVs released in 2013.

\n \n

A \"Fake-Off\" mode was developed to trick users into thinking their TV was off (by turning off the screen and front LEDs), while still recording voice conversations. Based on what we know about the TV, the hack would have tapped into the microphone located in a TV's accompanying remote.

\n
\n\n

I've read 'Can I monitor my network for rogue IoT device activity?' which gives some general ideas about how a network could be monitored, but I'm interested in specific ways that I could detect if my TV was infected and transmitting data to the cloud.

\n\n

Is there any way that I could detect if my TV was recording and transmitting audio to a malicious party?

\n\n

I'm thinking about anyone who may have developed a similar attack, too, not just the CIA's specific exploit. A problem I can foresee with the general methods in the linked question is that it might be hard to differentiate between general network traffic and malicious traffic from my TV\u2014is there any way I could tell between them easily?

\n\n

The TV is connected to a Netgear N600 router and I have no special monitoring equipment, but I'm happy to use Wireshark if necessary.

\n", "Title": "Is my Samsung Smart TV vulnerable to the \"weeping angel\" attack?", "Tags": "|smart-home|security|smart-tv|", "Answer": "

To answer your second question, yes, this attack was publicized in 2013 at Black Hat. Two Korean researchers demonstrated an attack that was developed against Android. Attacking TVs was easier than phones, because they didn't have to worry about excessive battery draining giving away the existence of the bug.

\n\n

The link above is to the slide presentation. It has a lot of technical information about how to infect a target remotely, just like any other attack. Some of it might be useful in examining your TV.

\n" }, { "Id": "1242", "CreationDate": "2017-03-19T02:55:10.280", "Body": "

I have a project to automate things in a house.\nI am a developer but a beginner in electronics and IoT.

\n\n

What should I use to communicate wirelessly? Wi-Fi, Bluetooth... Where should I look?

\n\n

I need a cheap, low-consumption and tiny solution, for example making an extra wireless light switch, or trying things like local triangulation with an integrated-circuit armlet for my housemates (there are no prisoners! The house is big, and the goal is a \"torch mode\" where the lights follow you, for energy savings).

\n\n

We also grow food (mushrooms), so optimization can be made on cultures in the future. I also want to open/close some doors.

\n\n

It must be modular, so an API at the end would be cool.

\n\n

Is a Bluetooth integrated circuit in each IoT device, centralized by a Raspberry Pi (server) and controllable over Wi-Fi (or directly through Bluetooth), a good thing to look at? What am I missing?

\n", "Title": "Which protocol should I use for automation devices in a home environment?", "Tags": "|smart-home|wireless|", "Answer": "

I would look at some of Nordic's SoC solutions that have integrated protocols. It's a good way to have a chip that lets you test different scenarios; Nordic has SoCs with most of the common protocols (Bluetooth, WiFi, IEEE, ANT, etc.) in one chipset.

\n\n

I would start with Bluetooth; it's the simplest, most versatile solution, IMHO. I am not sure about local triangulation, though; it seems like overkill for your requirements. Maybe look into Bluetooth beacons.

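To make the beacon suggestion concrete: BLE beacons broadcast a calibrated transmit power (the RSSI expected at 1 m), and a rough distance can be estimated from the received signal strength with a log-distance path-loss model. A minimal sketch, where the calibrated power of -59 dBm and the path-loss exponent of 2.0 are assumed values (real indoor environments vary a lot):

```python
def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance in metres from RSSI using the log-distance
    path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# At the calibrated power (RSSI measured at 1 m) the estimate is 1 m;
# a signal 20 dB weaker comes out as roughly 10 m with n = 2.0.
print(estimate_distance(-59))  # 1.0
print(estimate_distance(-79))  # 10.0
```

With several beacons, three or more such distance estimates can feed a trilateration step, but in practice RSSI is noisy enough that room-level presence detection is usually the realistic goal.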
\n" }, { "Id": "1256", "CreationDate": "2017-03-21T14:31:16.830", "Body": "

Is there any real implementation of LoRaWAN multicasting? According to the v1.0.2 LoRaWAN specification it is possible to send multicast frames, but I have found neither a node nor a network server for doing so.\nDoes anybody know of a way?

\n", "Title": "LoRaWAN Multicasting", "Tags": "|lora|", "Answer": "

Multicasting requires that all the devices you want to talk to are listening, so you have to start with a LoRaWAN class C stack, where devices are always listening. Using a \"group address\" is a trivial modification of the stack: you just have to add some code so that the device accepts incoming frames that contain either its own address or the group address.

\n\n
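The group-address filtering described above amounts to a few lines of logic. In this sketch the addresses are made up and the frame is simplified to just an address field, not the real LoRaWAN frame format:

```python
DEV_ADDR = 0x26011AAA    # this device's own address (invented)
GROUP_ADDR = 0x26FFFFFF  # shared multicast group address (invented)

def accept_frame(frame_addr, dev_addr=DEV_ADDR, group_addr=GROUP_ADDR):
    """A class C device listens continuously; it keeps a frame when the
    address field matches its own address or the group address."""
    return frame_addr in (dev_addr, group_addr)

print(accept_frame(0x26011AAA))  # True: unicast to this device
print(accept_frame(0x26FFFFFF))  # True: multicast to the group
print(accept_frame(0x26011BBB))  # False: addressed to another device
```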

What is complicated is making the communication secure as, if you use a group shared secret key, any device in that group that is compromised gives an attacker control of the whole group. Public key cryptography can be a solution but the math is quite intensive and takes a really long time to compute on a typical small embedded processor.

\n" }, { "Id": "1257", "CreationDate": "2017-03-21T19:18:33.010", "Body": "

I'm trying to answer 2 questions mainly:

\n\n
    \n
  1. What kinds of data are generated from 1 branch of IoT devices namely smart bulbs/smart lighting?
  2. \n
  3. How can we visualize the data generated by the same?
  4. \n
\n\n

I searched online specifically for this but could not find a lot of useful information. If any of you could help answer this question with relevant resource URLs, it would greatly help me with a school presentation on the topic.

\n", "Title": "Data generated from smart bulbs?", "Tags": "|smart-home|lighting|analytics|", "Answer": "

The bulbs do, in fact, generate data. They report their state to central hubs or perhaps to faraway servers, and this data can be queried, or even acted on, like a bulb that automatically turns off during the day.

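As a toy illustration of that data being acted on, a hub-side rule might switch a bulb off when it reports being on during daylight hours. The state dictionary and the hour thresholds here are invented for the example:

```python
def daytime_rule(bulb_state, hour):
    """Return the command to send, if any: switch the bulb off when it
    reports being on between 08:00 and 18:00."""
    if bulb_state.get("power") == "on" and 8 <= hour < 18:
        return "turn_off"
    return None

print(daytime_rule({"power": "on"}, hour=12))  # turn_off
print(daytime_rule({"power": "on"}, hour=22))  # None
```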
\n" }, { "Id": "1262", "CreationDate": "2017-03-21T23:17:34.440", "Body": "

I'm looking for a set of Z-Wave (preferably Z-Wave Plus) components to use in order to merge two 2-way light switch circuits and allow control of both lights from all switch positions. I've looked at Fibaro FGD-212 modules but I don't think they'll meet my needs.

\n\n

I'll give as much detail below as I can about my current setup and what I've investigated.

\n\n

This is for UK home wiring, in case that matters.

\n\n
\n\n

My lounge and stairs leading from it have one ceiling light each and both have two switches (making 4 distinct switches in total, no correlation in placement).

\n\n

I've also been investing (slowly) in Z-Wave equipment, and have started to design a replacement for the switches which would allow:

\n\n\n\n

My initial hope was to use 3 * double switches and a single switch, wire them up and control it all with code, but that was before I learnt how 2-way circuits work.

\n\n

Current setup

\n\n

3-core (plus ground) 2-way light switch circuits for both the stairs/hallway and lounge:

\n\n

\"current

\n\n

Explored approach

\n\n

I stumbled upon a guide to using Fibaro Dimmer 2 modules to achieve this, which would change the circuit to the following:

\n\n

\"Circuit

\n\n

I can see how the wiring and module would work, but my issues with using the module are:

\n\n\n\n

I've considered changing the circuits so that only one of the switches in each physically controls the light, but that isn't an option as I'd like to plan for a failure scenario which would leave one of the switches on each circuit useless.

\n\n

Is there a combination of Z-Wave switches which will enable 2-way switching? It must be a common problem.

\n", "Title": "Z-Wave switches with 2-way switched lights?", "Tags": "|smart-home|hardware|zwave|ac-power|lighting|", "Answer": "

In-line dimmers have to be dimmers rather than switches because they have no neutral return at the switch. This makes any multi-way arrangement nigh impossible.

\n

The 'obvious' homebrew solution is a Z-Wave relay, plus a unit to aggregate 'switch' requests into a control toggle. This probably requires an MCU or SBC, and it would extend to more than 2-way switching: use a transmit-only faceplate at each switch, Rx and Tx on your SBC, and Rx at some convenient point in the electrical circuit.

\n
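The aggregation step is small in code terms: every transmit-only faceplate sends the same stateless 'pressed' event, and the SBC folds a press from any switch into a toggle of the relay state. A sketch of that logic (the event name is invented):

```python
class LightToggler:
    """Aggregate stateless 'pressed' events from any number of
    switches into a single on/off relay state."""

    def __init__(self):
        self.relay_on = False

    def on_event(self, event):
        if event == "pressed":  # any switch, any location
            self.relay_on = not self.relay_on
        return self.relay_on

light = LightToggler()
print(light.on_event("pressed"))  # True: first press turns the lamp on
print(light.on_event("pressed"))  # False: a press from another switch turns it off
```

Because each switch is just an event source rather than part of the circuit, adding a fifth or sixth switch position costs nothing, which is exactly what the multi-wire 2-way arrangement makes hard.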

Without the Z-Wave requirement, you could pair a dimmer switch with a wire-free control. Wire-free means there is no requirement to replicate the standard 4-wire, 2-way switch arrangement. All switches have an equal input on the lamp.

\n" }, { "Id": "1274", "CreationDate": "2017-03-22T14:26:47.870", "Body": "

With the advent of several devices that are now considered wearables (fitbits, snapchat spectacles, apple watch, etc) are these devices considered IoT?

\n\n

Does it depend on the type of connectivity? For instance, the Apple Watch has WiFi and fitbits don't. Do they need to integrate with other services in order to be considered IoT?

\n", "Title": "Are wearables considered IoT?", "Tags": "|definitions|wearables|", "Answer": "

My experience is that it rarely matters \u2014 you can try to define precisely what is or isn't IoT, but you're probably wasting your time splitting hairs rather than solving problems.

\n\n

For a good overview of what different groups consider \"IoT\", you might want to read What classifies a device as IoT? \u2014 you quickly see that one question gives you at least ten different and conflicting views, and I'd suggest you ask yourself \"Why do I need to know if it's IoT anyway?\".

\n\n

In direct response to your question, though, I think most authors would consider wearables part of the Internet of Things. Mouser Electronics discuss this in an article on their website, which I found interesting:

\n\n
\n

For example, the wearable devices\u2019 question on many minds these days is \u201cAre wearable devices going to just be peripherals for a smart phone, or is there a more important role for them as part of the Internet of Things?\u201d If we are really moving toward a more pervasive deployment of intelligence into just about everything in our environment, shouldn\u2019t this apply to wearable devices too?

\n \n

[...] The promise of the IoT is based on pervasive connectivity and when associated with large collections of connected devices, significant benefits can accrue. How can wearable devices benefit from this concept too? For example, could your wearable devices interact with the devices of others in a crowd? Would you want to know if someone sitting near you on the train had a high fever?

\n
\n\n

The gist of their article is that the interconnectivity is the key part of whether they'll be part of the IoT\u2014if your device doesn't allow you to aggregate, process and use data through a network, it's probably not an IoT device. For example, a simple pedometer is probably not IoT, but your Fitbit might be!

\n" }, { "Id": "1284", "CreationDate": "2017-03-25T16:37:43.887", "Body": "

According to Cambridge News, a LoRaWAN network has recently been set up in Cambridge for the 'Intelligent City Plaform'\u2014essentially, a smart city platform for IoT devices to sense and influence the environment:

\n
\n

A new LoRa (low power long range) network has also been set up with the University to transfer data flowing from the sensors to the data hub, so that is can be analysed and visualised to plan smart solutions, including making transport systems more reliable and easier to use.

\n

The platform is among the first to collate data, which will allow citizens, third party developers and commercial partners to \u2018test bed\u2019 innovative applications including the new Cambridge mobile travel app, which will be available to download this summer.

\n
\n

I found this page from IoTUK Boost suggesting that the network is only open to people participating in the competition, but the article suggests that any developer might be able to connect.

\n

Could a normal citizen connect to and use the smart city network, or is access restricted to those with prior permission?

\n", "Title": "Can I connect my devices to the Cambridge Intelligent City Platform?", "Tags": "|lora|", "Answer": "

The LoRaWAN network seems to be more an enabling part than the main project aim. They're enabling various university groups (and others) with sensor deployment (and have the capability for city-wide coverage now, albeit at a fairly low bit rate).

\n\n

The back-end of the network is probably more interesting, with the ability to publish/subscribe to data in real time. Some of the public data is presented at http://smartcambridge.org/ (and you can see they're presenting this as a platform, rather than a transport layer).

\n\n

For now, the project seems to be mostly provided by the university (as a research vehicle) and used by the council to guide policy, but that doesn't rule out other applications. Since this is a fairly new innovation, access policies are probably a bit ad hoc (although security and privacy shouldn't be new to the people involved).

\n" }, { "Id": "1289", "CreationDate": "2017-03-28T11:10:08.517", "Body": "

I'm trying to make a remotely controlled servo motor controller on an ESP8266, driven by a server.\nThe problem I'm facing is how to make an asynchronous timer, like tmr.alarm(), but in microseconds. tmr.delay() doesn't work so well, because it blocks everything else and is not very accurate.\nYou can get this to work on an Arduino, but how can it be implemented in Lua?

\n", "Title": "Sub-millisecond timer for ESP8266 in Lua", "Tags": "|esp8266|", "Answer": "

I have managed to recompile the NodeMCU firmware with the microsecond timer enabled:

\n\n\n\n

Also add the line system_timer_reinit(); to the SDK initialisation code (it must be called before any microsecond timer is armed):

\n\n
    \n
  1. ./sdk-overrides/osapi.h\nadd above the line #include_next \"osapi.h\": #define USE_US_TIMER

  2. \n
  3. ./app/modules/tmr.c -> static int tmr_start(lua_State* L){\nchange: os_timer_arm -> os_timer_arm_us

  4. \n
  5. ./app/modules/tmr.c -> static int tmr_interval(lua_State* L){\nchange: os_timer_arm -> os_timer_arm_us

  6. \n
  7. ./app/modules/tmr.c: leave os_timer_arm in int luaopen_tmr( lua_State *L ){as is, otherwise you will get a watchdog reset upon start-up

    \n\n
  8. \n
\n\n

With the CPU running at 160 MHz, I have managed to sample the ADC at 8.3 kHz (a timer delay of 125 us). If I go faster, the watchdog kicks in.

\n\n

Code:

\n\n
    local mytimer2 = tmr.create()\n    local count = 0\n    local count2 = 0\n    local adc_read = adc.read\n    mytimer2:register(125, 1, function (t2) \n        count = count + 1; count2 = count2 + 1\n        local adc_v = adc_read(0) \n        if (count2 == 500) then \n            count2 = 0\n        end\n        if count == 100000 then\n            mytimer2:stop()\n            print(\"Time at end: \"..tmr.time())\n            print(\"Counter: \"..count)\n        end\n    end)\n    print(\"Time at start: \"..tmr.time())\n    mytimer2:start()\n
\n\n

Output:

\n\n

Time at start: 1

\n\n

Time at end: 13

\n\n

Counter: 100000

\n\n

100,000 reads in 12 seconds.

\n" }, { "Id": "1293", "CreationDate": "2017-03-28T19:23:54.837", "Body": "

My main objective is to make my arduino or create a app on android to control lights in the house.

\n\n

So my home has the Nexwell Tukan and I can control the lights and power outlets and more, it also has a LAN card with it so it can be controlled with a mobile phone or a PC, the app is called Nexovision. In that program you can control different things by adding them.

\n\n

So what I need to do is somehow get the packets that the software uses to turn on and off different things and make arduino/the android app send them.

\n\n

\"l.soverom 1\" is the name of my room in Nexwell. 6528 means the lights are on and 0 means they are off. My PC's IP is the one ending in 154 and the IoT device's ends in 75. The first lines are probably the authentication.

\n\n

Here is the link to the WireShark .pcapng.

\n", "Title": "Trying to switch my lights off in my smart home", "Tags": "|smart-home|communication|", "Answer": "

I've not looked at the system in detail, but it really should have properly encrypted/authenticated transmission. This means that unless you can extract the app's private certificate, or otherwise man-in-the-middle the LAN traffic, you won't be able to just tap into the system as you propose. The first sign of this being done right would be observing that the transmission uses TLS (i.e. HTTPS rather than HTTP).

\n\n
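One quick way to check that first sign is to see whether the controller's port completes a TLS handshake at all. A sketch using only Python's standard library; the host and port you probe are whatever the Tukan controller exposes on your LAN:

```python
import socket
import ssl

def speaks_tls(host, port, timeout=3):
    """Return True if the service completes a TLS handshake,
    False if the connection or handshake fails."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we only care whether TLS works at all,
    ctx.verify_mode = ssl.CERT_NONE  # not whether the certificate is trusted
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False

# e.g. speaks_tls("192.168.1.75", 443) against the controller's web port
```

If the answer is False on every port the app talks to, the traffic you captured in Wireshark is plaintext, which matches what you are seeing and means the commands could in principle be replayed.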

It may be that there is a key exchange happening when you first authenticate a phone onto the system, maybe you can legitimately use this process to acquire the right identification on your Pi.

\n\n

Does the system have any IFTTT integration? You still shouldn't be able to get direct access, but you can maybe send commands indirectly.

\n" }, { "Id": "1302", "CreationDate": "2017-03-30T14:42:32.703", "Body": "

I've been learning recently about the Smarter FridgeCam. According to their article, one of the features is:

\n\n
\n \n
\n\n

One thing I have been unable to decipher, though: do you have to manually punch in all those dates on your smartphone, or can you run the date past a scanner and have it automatically add an entry with that date and product?

\n", "Title": "Do I have to manually enter expiry dates with the Smarter FridgeCam?", "Tags": "|smart-home|", "Answer": "

I couldn't find anyone to explicitly confirm it, but I think it's very likely. If you look carefully at the image they provide on their website:

\n\n

\n\n

You can see that the third menu item is 'Add expiry', which suggests that you have to manually do it. Also, note that almost none of the items in the image are in their original packages, and so they don't have their expiry dates written on them at all.

\n\n
\n\n

As an aside, I suspect that detecting expiry dates would be a complicated job\u2014far more difficult than just getting the user to do it. For a camera to be able to detect the expiry date:

\n\n
    \n
  1. The item must be aligned the right way so the date is clearly visible
  2. \n
  3. The text must be large enough for the scanner to read, in an appropriate font
  4. \n
  5. The scanning software must be able to spot which bit is the expiry date on the package (you could perhaps just look for the first date you could see on the packaging, but many items have both Best Before and Use By dates, and might even have production dates on them. That's not even considering the different date formats that could be used!)
  6. \n
\n\n

As you can see, it's technically far easier to just have the user enter the date, and if they've already had to align the packaging the correct way, reading the date isn't much more effort.

\n\n
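Point 3 is the messy one: even with perfect character recognition, the software still has to pick the right date out of free text. A small sketch of just the parsing side, covering only two of the many possible formats:

```python
import re
from datetime import datetime

# Each pattern pairs a regex with the strptime format it corresponds to.
DATE_PATTERNS = [
    (r"\b\d{2}/\d{2}/\d{4}\b", "%d/%m/%Y"),  # e.g. 25/12/2017
    (r"\b\d{4}-\d{2}-\d{2}\b", "%Y-%m-%d"),  # e.g. 2017-12-25
]

def extract_dates(ocr_text):
    """Return every date-like string found, parsed into date objects."""
    found = []
    for pattern, fmt in DATE_PATTERNS:
        for match in re.finditer(pattern, ocr_text):
            found.append(datetime.strptime(match.group(0), fmt).date())
    return found

text = "BEST BEFORE 25/12/2017  PRODUCED 2017-01-03"
print(extract_dates(text))
```

Both dates parse successfully, and nothing in the text tells the code which one is the expiry date; resolving that needs heuristics on the surrounding words, which is exactly the ambiguity described in point 3.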

Maybe in the future there will be a standard format to show the use-by date... but don't get your hopes up too much!

\n" }, { "Id": "1313", "CreationDate": "2017-04-03T10:36:03.070", "Body": "

I have a prototype of a resource-constrained device (an 8-bit MCU with no-OS firmware) interacting with a web server. I wonder whether there are any solutions, frameworks or cloud services for updating my device's firmware from the web. From my research there is Microsoft IoT Hub, but I am afraid it does not suit such resource-constrained devices. There is one more solution I found, the mbed Cloud portal, but I am not sure how it works. Can anyone help me with any advice? Maybe there are some best practices for implementing firmware upgrade over the air for embedded devices in a secure and robust way?

\n", "Title": "Are there any ready cloud services or frameworks for firmware update over the air?", "Tags": "|security|microcontrollers|over-the-air-updates|", "Answer": "

I'll be answering only this part, as I know of no 'out of the box' system for unknown firmware.

\n\n
\n

maybe there are some best practices for implementing firmware upgrade\n over the air for embedded devices in secure and robust way?

\n
\n\n

In terms of practice, what I would do is as follows:

\n\n

1) Have a very minimal boot loader, something as dumb as possible, responsible only for loading the firmware, with the following constraints:

\n\n\n\n

2) Set up your storage to have two \"boot banks\" of reasonable size to handle future evolution and firmware growth.

\n\n

3) Checksum the firmware image after download to ensure it is correct before burning, and checksum the destination bank after burning to again ensure it won't fail to boot because of a missing bit somewhere.

\n\n

The usually overlooked point is checksumming the downloaded image before and after burning; skipping this results in a corrupted system being written to the device. Using two banks and alternating between them usually eases the update process.

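The download-verify-burn-verify flow with alternating banks can be sketched as follows. SHA-256 stands in for whatever checksum the bootloader actually supports, and the bank abstraction is invented for the example:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).hexdigest()

def flash_update(image, expected_digest, banks, active_bank):
    """Verify the downloaded image, burn it to the inactive bank,
    verify the burned copy, then switch the boot bank."""
    if sha256(image) != expected_digest:
        raise ValueError("download corrupted, aborting before burn")
    target = 1 - active_bank          # alternate between bank 0 and bank 1
    banks[target] = bytes(image)      # 'burn' the image to the inactive bank
    if sha256(banks[target]) != expected_digest:
        raise ValueError("burn failed verification, keeping old bank")
    return target                     # the new active bank

banks = [b"old-firmware", b""]
image = b"new-firmware-v2"
new_active = flash_update(image, sha256(image), banks, active_bank=0)
print(new_active, banks[new_active])  # 1 b'new-firmware-v2'
```

Because the old bank is never touched until the new one verifies, a failed or interrupted update leaves the device bootable from the previous firmware, which is the robustness property the answer is after.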
\n" }, { "Id": "1314", "CreationDate": "2017-04-03T14:16:26.257", "Body": "

I recently ran across an article on arstechnica.com which outlines a recently discovered hack for teddy bears. Apparently, Spiral Toys, the manufacturer of Cloud Pets Stuffed Animals, implemented a security breach in their teddy bears which allowed for over 2 million voice recordings being leaked, as well as the e-mail addresses and passwords of over 800,000 accounts.

\n\n

Does anyone know anything more about this attack? Can these teddy bears be secured with a firmware update or something? How can a Cloud Pets Teddy Bear be kept from being hacked?

\n", "Title": "Keep my Teddy Bear from being hacked", "Tags": "|smart-home|security|", "Answer": "

In general, a fairly common denominator in large-scale breaches is that it is not the individual IoT devices (teddy bears, toys, sensors and so on) that get hacked; instead, the central servers get compromised.

\n\n

Patches or security updates to the IoT device itself won't resolve that...

\n\n

Since the individual IoT devices have only limited compute capacity, they uplink over the Internet to large-scale servers and datacenters operated by the manufacturer for their number crunching.
\nThere, the data from your devices/toys gets sent, associated with your account and profile, and subsequently stored and processed. Often the algorithms processing your data improve when they get to work with more (aggregated) data, and neither the stored original data nor such aggregated data will ever be deleted.

\n\n

Often, access to that data proves to be insufficiently secured, and when that security gets compromised it is not the data from a single device or user that gets leaked, but data from many, if not all, customers.

\n\n

That is also the case in the article you linked to.

\n\n

After such large incidents and data breaches you, as an end user, might get to see updates in the app/firmware/account homepage allowing you to opt out of such data collection, but typically that comes with a (significant) reduction in functionality, if the device can still operate at all without such central data processing.

\n" }, { "Id": "1320", "CreationDate": "2017-04-05T15:50:47.893", "Body": "

I want to check, from my embedded program, whether the UART USB is connected to a PC or not. Is there any indication line in USB that I can connect to an interrupt pin to detect this from the embedded setup? From the PC side I can get an indication of USB connection and disconnection, but I want to know from the embedded hardware side.

\n\n

Details: data is sent continuously from the embedded setup to the PC through USB, and as soon as the USB is disconnected from the PC I need to take some action in the setup. For that I need an indication of the disconnect. It is a time-critical task, so I can't add any heartbeat to keep both in sync.

\n\n

Is there any other line indication through usb?

\n", "Title": "How to check usb connectivity?", "Tags": "|hardware|system-architecture|usb|", "Answer": "

On Android, device discovery functionality is implemented; maybe you can also implement it on your embedded device, or at least find an API that provides it.

\n\n
\n

Discovering a device

\n \n

Your application can discover USB devices by either using an intent\n filter to be notified when the user connects a device or by\n enumerating USB devices that are already connected. Using an intent\n filter is useful if you want to be able to have your application\n automatically detect a desired device. Enumerating connected USB\n devices is useful if you want to get a list of all connected devices\n or if your application did not filter for an intent.

\n
\n\n

developer.android.com source

\n" }, { "Id": "1322", "CreationDate": "2017-04-05T19:08:46.357", "Body": "

Since the emergence of these two technologies, it seems a near-future possibility that blockchain and other forms of cryptocurrency will be used more frequently.

\n\n

According to this article:

\n\n
\n

The decentralized, autonomous, and trustless capabilities of the blockchain make it an ideal component to become a foundational element of industrial IoT solutions. It is not a surprise that enterprise IoT technologies have quickly become one of the early adopters of blockchain technologies.

\n
\n\n

Furthermore, at the end of the article, a company named Filament is described as using Bitcoin payments to enable sensors for particular applications in different geographical regions.

\n\n

Are there currently any open-source applications available to peek into for blockchain + IoT?

\n", "Title": "Are there any applications where blockchain is used with IoT?", "Tags": "|networking|blockchains|", "Answer": "

I came across an interesting discussion about IoT and blockchain, started by Theo Priestley on LinkedIn, stating:

\n
\n

IoT cannot

\n

-- authenticate millions/billions of service nodes (sensors, devices, etc.),

\n

-- secure data between sensors and the database,

\n

-- provide firmware & operating system protection,

\n

-- manage IoT nodes without servers, nor

\n

-- manage provisioning of IoT services & nodes.

\n

Having said this, blockchain can be adapted to IoT applications -- primarily in asset accounting, general ledger & payments -- assuming IoT assets are allocated and in a static state.

\n
\n

At least for me, the vision of these two being a good match, in the sense of a pure IoT technology conjunction, was thrown in the garbage after this read.

\n

Roger Attick (the author of the quotation) is a source I value on IoT and new technologies in general.

\n

The reasoning is that the transaction rate achievable with a blockchain is too slow (original post by TP).

\n" }, { "Id": "1334", "CreationDate": "2017-04-10T09:33:57.120", "Body": "

A new card in the Google Home app says that multiple users are now supported:

\n\n
\n

Multiple users now supported

\n \n

Now, you and others in your home can get a personalized experience from your Assistant on Google Home.

\n
\n\n

How does my Google Home plan to distinguish between me and someone else using the device? Will I need to say who I am each time I use it, or will it recognise my voice and switch automatically?

\n\n

This article subtly hints at the Google Home getting the voice recognition feature, but I couldn't find any authoritative sources confirming it.

\n\n

Is there any information to confirm how the multi-user feature will work yet?

\n", "Title": "How do I switch between users with my Google Home?", "Tags": "|google-home|voice-recognition|", "Answer": "

Google has now announced how to set up multiple accounts on the Google Home:

\n\n
    \n
  1. Click the 'multi-user is available' card in the Google Home app
  2. \n
  3. A list of devices will pop up. Find the correct one, then click \"Link your Account\"
  4. \n
  5. A wizard will then guide you through the steps of adding a new user. To train the device to recognise your voice, you have to say \"OK Google\" and \"Hey Google\" twice, and from then on, your account is linked and your voice is recognised by the device.
  6. \n
\n\n

The voice recognition is apparently done locally on the device through a neural network, and I'm pretty surprised by how little training data the device needs to add a new user; the days of spending hours repeating sentences to train a speech-to-text program are gone!

\n\n

The function is apparently only available in the US at the minute\u2014Google promise that it's coming to the UK soon enough, though.

\n" }, { "Id": "1338", "CreationDate": "2017-04-10T14:57:33.483", "Body": "

I've read a certain amount about the Mirai worm, a virus that attacks Internet of Things devices using default usernames and passwords and is essentially wired to produce a Distributed Denial of Service (DDoS).

\n\n

However, I've recently read about another worm, BrickerBot, also an attack on Internet of Things devices, which according to this article on thenextweb.com results in a Permanent Denial of Service (PDoS).

\n\n

What is the difference between these two attacks as relates to the denial of service? Otherwise stated, what is the difference between DDoS and PDoS as relates to these IoT attacks?

\n", "Title": "What is the difference between a DDoS attack and a PDoS attack?", "Tags": "|security|mirai|", "Answer": "

DDoS vs. \"PDoS\"

\n

1. DDoS (for reference)

\n

A conventional distributed denial of service attack (DDos) is a class of denial of service (DoS) attacks in which a distributed system (botnet) consisting of nodes controlled via some application (Mirai, LizardStresser, gafgyt, etc.) is used to consume the resources of the target system or systems to the point of exhaustion. A good explanation of this is given on security.SE.

\n

An explanation of how Mirai-controlled botnets accomplish denial of service can be found in an analysis by Incapsula:

\n
\n

Like most malware in this category, Mirai is built for two core purposes:

\n\n

To fulfill its recruitment function, Mirai performs wide-ranging scans of IP addresses. The purpose of these scans is to locate under-secured IoT devices that could be remotely accessed via easily guessable login credentials\u2014usually factory default usernames and passwords (e.g., admin/admin).

\n

Mirai uses a brute force technique for guessing passwords a.k.a. dictionary attacks...

\n

Mirai\u2019s attack function enables it to launch HTTP floods and various network (OSI layer 3-4) DDoS attacks. When attacking HTTP floods, Mirai bots hide behind the following default user-agents...

\n

For network layer assaults, Mirai is capable of launching GRE IP and GRE ETH floods, as well as SYN and ACK floods, STOMP (Simple Text Oriented Message Protocol) floods, DNS floods and UDP flood attacks.

\n
\n

These types of botnets accomplish resource exhaustion resulting in denial of service by using controlled devices to generate such large volumes of network traffic directed towards the target system that the resources provided by that system become inaccessible for the duration of the attack. Once the attack ceases, the target system no longer has its resources consumed to the point of exhaustion and can again respond to legitimate incoming client requests.

\n

2. "PDoS"

\n

The BrickerBot campaign is fundamentally different: instead of integrating embedded systems into a botnet which is then used to orchestrate large-scale attacks on servers, the embedded systems themselves are the target.

\n

From Radware's post on BrickerBot \u201cBrickerBot\u201d Results In Permanent Denial-of-Service:

\n
\n

Imagine a fast moving bot attack designed to render the victim\u2019s hardware from functioning. Called Permanent Denial-of-Service (PDoS), this form of cyber-attack is becoming increasingly popular in 2017 as more incidents involving this hardware-damaging assault occur.

\n

Also known loosely as \u201cphlashing\u201d in some circles, PDoS is an attack that damages a system so badly that it requires replacement or reinstallation of hardware. By exploiting security flaws or misconfigurations, PDoS can destroy the firmware and/or basic functions of system. It is a contrast to its well-known cousin, the DDoS attack, which overloads systems with requests meant to saturate resources through unintended usage.

\n
\n

The embedded systems targeted for permanent incapacitation do not have some application downloaded onto them for purposes of remote control and are never part of a botnet (emphasis mine):

\n
\n

Compromising a Device

\n

The Bricker Bot PDoS attack used Telnet brute force - the same exploit vector used by Mirai - to breach a victim\u2019s devices. Bricker does not try to download a binary, so Radware does not have a complete list of credentials that were used for the brute force attempt, but were able to record that the first attempted username/password pair was consistently 'root'/'vizxv.\u2019

\n

Corrupting a Device

\n

Upon successful access to the device, the PDoS bot performed a series of Linux commands that would ultimately lead to corrupted storage, followed by commands to disrupt Internet connectivity, device performance, and the wiping of all files on the device.

\n
\n

A third difference is that this campaign involves a small number of attacker-controlled devices, instead of many thousands or millions:

\n
\n

Over a four-day period, Radware\u2019s honeypot recorded 1,895 PDoS attempts performed from several locations around the world.

\n

The PDoS attempts originated from a limited number of IP addresses spread around the world. All devices are exposing port 22 (SSH) and running an older version of the Dropbear SSH server. Most of the devices were identified by Shodan as Ubiquiti network devices; among them are Access Points and Bridges with beam directivity.

\n
\n

Summary

\n

Given the number of ways that the BrickerBot "PDoS" campaign fundamentally differs from conventional "DDoS" campaigns like Mirai, using similar-sounding terminology is likely to result in confusion.

\n\n" }, { "Id": "1339", "CreationDate": "2017-04-10T14:59:01.803", "Body": "

I'm currently spitballing ideas for an Internet of Things device. It will be running Linux on a widely available development board, I'm not too concerned about physical security or the end user doing something bad to it however I'd like to secure it from botnets such as the mirai botnet.

\n\n

For things such as the root user account, how should I secure it: should I use key based authentication or just a regular password? I want to be able to give end users access to the root user account upon request or provide it out of the box.

\n\n

I don't want to have any extra user accounts or store any authentication details on my servers in case of a breach.

\n", "Title": "How to secure root on IoT device while remaining open to tinkerers", "Tags": "|security|linux|authentication|", "Answer": "

Assuming you need to handle firmware updates (otherwise, you're insecure by default), you will also need to sign the updates (and manage updates in a secure way). Otherwise, an attacker can simply present your device with a compromised update package. Clearly, if you do this, you make it impossible for the user to apply firmware updates in the same way, but you don't necessarily need to lock them out entirely (since they have physical access).

\n\n
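As a minimal illustration of the signing idea, here is a sketch using HMAC from Python's standard library. Note this is a stand-in: a real deployment would use an asymmetric signature (e.g. Ed25519 or RSA) so the device holds only a verification key and never the signing secret.

```python
import hashlib
import hmac

SIGNING_KEY = b"vendor-secret"  # placeholder; never ship a shared secret like this

def sign_update(firmware):
    """Produce a signature over the firmware image (build-server side)."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).hexdigest()

def verify_update(firmware, signature):
    """Device side: constant-time comparison so an attacker
    can't probe the check byte by byte."""
    expected = sign_update(firmware)
    return hmac.compare_digest(expected, signature)

fw = b"firmware-v2"
sig = sign_update(fw)
print(verify_update(fw, sig))           # True: untouched image is accepted
print(verify_update(b"tampered", sig))  # False: modified image is rejected
```

The device refuses any image whose signature fails, which closes the compromised-update-package hole described above even if the download channel itself is attacker-controlled.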

In addition, you'll need to provide a secure way of provisioning each device with a unique SSH key visible to the end user (the keys could be contained on the device, though, if you accept that physical access means ownership).

\n" }, { "Id": "1348", "CreationDate": "2017-04-10T22:54:47.857", "Body": "

I just setup an Arlo Pro security camera (made by Netgear) and found it interesting that the system didn't explicitly have me configure a WiFi connection. The setup process consists of hard-wiring the base receiver to my network, then simultaneously pressing sync buttons on both the base receiver and camera.

\n\n

So, do Arlo Pro cameras use WiFi or some other form of wireless to connect to the base receiver?

\n", "Title": "Does Arlo Pro communicate via WiFi?", "Tags": "|security|networking|digital-cameras|surveillance-cameras|", "Answer": "

If, on the other hand, we look at the user guide, we will see that WiFi provides a 100% cordless connection from the camera to the base station, while Ethernet is the way the whole system connects to the Internet.

\n\n

According to that document, the idea that the cameras use WiFi to reach the Internet is not correct. Likewise, wiring the base station to Ethernet in the previous step does not mean that your local home WiFi is involved in the pairing process in any way. These are separate steps for separate functions: the base station and cameras always share a WiFi network that belongs to them alone and is created by the base station, not by your local WiFi access point.

\n\n

To answer the question: the cameras connect to the base station directly via WiFi.

\n" }, { "Id": "1362", "CreationDate": "2017-04-11T16:54:53.693", "Body": "

If it's possible, how can I set the \"Mode\" for an Arlo device using a SmartThings Routine? Ideally, I would simply set the camera modes when SmartThings goes into various security modes.

\n\n

I've found that I don't really like the Arlo security modes and have created my own notification/recording modes and set them to a schedule in the Arlo app. But I'd prefer it to be triggered by our family's presence like with SmartThings.

\n\n

Looking into SmartThings Routines, it appears that I can \"turn on\" the camera in a routine, but I'm not quite sure what that means. There doesn't appear to be an option to record video through SmartThings except when viewing the device.

\n\n

Update:

\n\n

This SmartThings post helped quite a bit in how to integrate Arlo into routines. hirsti's step-by-step instructions helped the most (the post didn't seem to have an anchor...)

\n\n

See my answer below for steps

\n", "Title": "Set Arlo Mode with SmartThings", "Tags": "|security|samsung-smartthings|digital-cameras|mobile-applications|", "Answer": "

This ended up working for me:

\n\n\n\n

By turning off/on the camera as a switch in SmartThings, it is essentially the same as turning off/on the Camera switch when configuring camera settings through the Arlo app.

\n\n

Configuring SmartThings/Arlo Camera Status (only needs to be done in SmartThings)\n(screenshots: SmartThings and Arlo camera settings)

\n\n

Configuring automation routines. Activating in SmartThings, tasks are handled in Arlo mode:\n(screenshots: setting up the routines)

\n" }, { "Id": "1367", "CreationDate": "2017-04-12T16:49:00.067", "Body": "

I have some Bluvision Beeks beacons equipped with temperature sensors. I can adjust their transmit powers. I am wondering if setting a higher transmit power for a particular beacon will result in a better sensor reading than if the beacon were set to a lower transmit power in general. Or is higher transmit power only provided in order to achieve a longer range?

\n", "Title": "How does transmit power influence the accuracy of beacon sensor readings?", "Tags": "|sensors|bluetooth-low-energy|beacons|data-transfer|", "Answer": "

Bluetooth (like pretty much every other modern transmission protocol, in contrast to analogue sensing systems such as radar) is based on a digital protocol. This means that the signals are binary, and are protected by error detection/correction codes.

\n\n

So long as the signal is strong enough that there are only a few errors in any one packet, the resulting sensor reading which is sent will not change. Specifically in the case of BLE, there is no error correction overhead in the packets, just a CRC. Any received packet which is errored will not be acknowledged. This causes the packet to be re-sent (so increasing latency as a trade-off for improved typical throughput). (from here, as per @Aurora0001)

\n\n
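The effect of the CRC described above can be illustrated with a short sketch. BLE actually uses a 24-bit CRC; the example below uses CRC-32 from the standard library instead (an assumption made only to keep it dependency-free), and the payload encoding is hypothetical, but the principle is identical: any bit flip changes the checksum, so the corrupted packet is discarded and re-sent rather than producing a different sensor reading.

```python
import zlib

payload = bytes([0x17, 0x2A])  # hypothetical encoding of one sensor reading
crc = zlib.crc32(payload)      # checksum appended to the packet by the sender

# Simulate a single bit flipped in transit by a weak signal:
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]

# The receiver recomputes the CRC; a mismatch means the packet is dropped
# (and, in a connection, not acknowledged, which triggers a re-send).
assert zlib.crc32(payload) == crc       # intact packet is accepted as-is
assert zlib.crc32(corrupted) != crc     # damaged packet is detected and dropped
```

So lowering transmit power degrades packet delivery rate and latency, not the numeric value of the readings that do get through.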

More power can sometimes cause problems where you have lots of sensors sharing the same band.

\n" }, { "Id": "1371", "CreationDate": "2017-04-14T15:14:00.093", "Body": "

I have a system where a client (let's call it ClientA) can publish requests to a particular MQTT topic. The broker, in case it matters, is Amazon Web Services. Then I have another client (let's call it MainSubscriber) which is always subscribed to the same topic, so that it can pick up requests from ClientA and do some work that, in the end, turns into a database operation. The database, in case it matters, is DynamoDB.

\n\n

Since the MainSubscriber may not always be accessible/online, there is a desire to have a failover subscriber as a backup for the main subscriber. The idea is that if the main subscriber does not handle the request in a timely manner, then the failover subscriber would kick in and do the equivalent work/database operation. The challenge is that the \"work\" and the resulting \"database operation\" must not be duplicated by both main and failover subscribers.

\n\n

Here's a logical system architecture drawing for this system.

\n\n
                   -----> MainSubscriber ----\n                  /                          \\\nClientA --> Broker                            ---> Database\n                  \\                          /\n                   ---> FailoverSubscriber --\n
\n\n

Clearly, there are some challenges with such a system:

\n\n
    \n
  1. How does the main subscriber indicate to the failover subscriber that it is working on the request?
  2. \n
  3. How does the failover subscriber detect that the main subscriber has not picked up the request and needs to start working on it?
  4. \n
  5. How does the failover subscriber then hold off the main subscriber in case it all of a sudden comes back online and picks up the request?
  6. \n
  7. How to deal with synchronicity issues between main and failover subscribers?
  8. \n
\n\n

I would rather not have to reinvent the wheel if an existing solution already exists for such a scheme. So, my first question is whether there is something out there already?

\n\n

If not, then I was thinking of using DynamoDB with Strongly Consistent reads to act as the mediator between the Main and Failover subscribers. So, my second question is whether there are any well-established schemes for doing this?

\n", "Title": "How can I set up main and failover MQTT subscribers for a job queue with AWS IoT?", "Tags": "|mqtt|aws-iot|aws|", "Answer": "

You might want to look at the concept of dead-letter queues in AWS SQS. From the AWS docs:

\n\n
\n

A dead letter queue is a queue that other (source) queues can target\n for messages that can't be processed (consumed) successfully. You can\n set aside and isolate these messages in the dead letter queue to\n determine why their processing did not succeed.

\n
\n\n

So, if you point the main subscriber at the normal queue and the secondary subscriber at the dead-letter queue, the failover problem should be solved.

\n\n
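The wiring for this could look something like the sketch below with boto3 (a rough sketch: the queue URL and ARN are placeholders, and the `attach_dlq` call requires AWS credentials, so only the policy helper is exercised here). A message that the main subscriber receives but fails to delete `maxReceiveCount` times is moved to the DLQ automatically.

```python
import json

def redrive_policy(dlq_arn: str, max_receives: int = 5) -> dict:
    """Queue attributes that move a message to the DLQ after it has been
    received (but not deleted) max_receives times from the main queue."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receives),
        })
    }

def attach_dlq(queue_url: str, dlq_arn: str) -> None:
    """Apply the policy to an existing queue (needs AWS credentials)."""
    import boto3  # deferred so the policy helper stays dependency-free
    boto3.client("sqs").set_queue_attributes(
        QueueUrl=queue_url,
        Attributes=redrive_policy(dlq_arn),
    )

# MainSubscriber polls the main queue; FailoverSubscriber polls the DLQ,
# so it only ever sees messages the main one failed to process.
attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:jobs-dlq")
assert json.loads(attrs["RedrivePolicy"])["maxReceiveCount"] == "5"
```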

Also, with this, problems 1, 2 and 3 are taken care of. The main and secondary subscribers don't need to talk to each other in this case.

\n\n

Also, building upon Tensibai's answer, make sure your subscriber code is written to receive one message at a time if multiple subscribers are listening to the same queue, due to the visibility timeout.

\n\n
\n\n

The downside is that it would introduce a delay in processing, since messages enter the dead-letter queue only after a while.

\n\n

So, in case you wouldn't want that, you can go ahead with Tensibai's answer. And if you can tolerate the delay, you can use this approach instead of having an extra DynamoDB table for status checks.

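If you do go the DynamoDB-table route, the well-established scheme is a conditional write (optimistic locking): both subscribers try to create an item keyed on the request ID, and DynamoDB guarantees only one put succeeds. A hypothetical sketch (table and attribute names are made up; the in-memory dict at the end only mimics the semantics so the idea can be seen running):

```python
def claim_request(table, request_id: str, worker: str) -> bool:
    """Attempt to claim a request; True only for the first claimer.

    `table` is a boto3 DynamoDB Table resource; the condition makes the
    put fail if another worker has already written this request_id.
    """
    from botocore.exceptions import ClientError  # deferred import
    try:
        table.put_item(
            Item={"request_id": request_id, "owner": worker},
            ConditionExpression="attribute_not_exists(request_id)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # lost the race: the other subscriber owns it
        raise

# The same semantics mimicked in memory: dict.setdefault keeps the first
# writer's value, just as the conditional put keeps the first item.
claims = {}

def claim_in_memory(request_id: str, worker: str) -> bool:
    return claims.setdefault(request_id, worker) == worker

assert claim_in_memory("req-1", "MainSubscriber") is True
assert claim_in_memory("req-1", "FailoverSubscriber") is False
```

Whichever subscriber loses the claim simply drops the message, so the work and the database operation are never duplicated.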
\n" }, { "Id": "1386", "CreationDate": "2017-04-19T19:42:10.710", "Body": "

I have a ZWave light bulb and a ZWave 4-button wall switch, both connected to Domoticz installed on a Raspberry Pi.

\n

I'd like the following scenarios:

\n\n

Each button overrides the previous action (Button 3 -> Button 1 = on for 30 minutes)

\n

Wiring and programming the buttons is easy, but now, what about the timer? I'd like to avoid creating a homemade service because I'm afraid of messing with init.d.

\n

I have 3 possibilities:

\n
\n

Domoticz dummy switch

\n

Domoticz allows you to create a dummy switch which can change state after a delay set in the interface:

\n

\"domoticz

\n

Pros

\n\n

Cons

\n\n
\n

at and atq

\n

at is a Linux command for scheduling a one-off action, as simple as

\n
at [when] < [what]\n
\n

Pros

\n\n

Cons

\n\n
\n

Crontab

\n

Crontab is a Linux service for scheduling repetitive tasks. In my case it would be a simple

\n
# check every minute\n* * * * * /path/checktimer.sh\n
\n
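For illustration, the checktimer script referenced above could be a small script along these lines (a hypothetical sketch in Python: the file path, Domoticz URL and device idx are examples for my setup, and the button handler is assumed to write the expiry time, as an epoch timestamp, to the file):

```python
import os
import time
import urllib.request

TIMER_FILE = "/tmp/lamp_timer"  # the button handler writes the expiry epoch here
SWITCH_OFF_URL = ("http://127.0.0.1:8080/json.htm"
                  "?type=command&param=switchlight&idx=5&switchcmd=Off")

def expired(now: float, deadline: float) -> bool:
    """True once the timed window (e.g. button 1's 30 minutes) has elapsed."""
    return now >= deadline

def check_timer() -> None:
    """Run every minute from cron; one-shot switch-off when the timer expires."""
    if not os.path.exists(TIMER_FILE):
        return  # no timer armed: lamp is in plain on/off mode
    with open(TIMER_FILE) as fh:
        deadline = float(fh.read().strip())
    if expired(time.time(), deadline):
        urllib.request.urlopen(SWITCH_OFF_URL)  # Domoticz JSON API call
        os.remove(TIMER_FILE)  # disarm, so Off isn't re-sent every minute
```

Deleting the timer file after firing is what makes each button press override the previous action: arming a new timer just rewrites the file.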

Pros

\n\n

Cons

\n\n
\n

To my question:

\n\n", "Title": "Reliable timers for always on/timed/off lamps", "Tags": "|smart-home|domoticz|", "Answer": "

I've found another way that is somewhat expensive but works with external processes such as PHP's system(): screen.

\n\n

First, install it:

\n
sudo apt-get install screen\n
\n

Then launch your command:

\n
screen -dm -S taskid bash -c 'sleep 20 && command'\n    -dm : detach process\n    -S  : identify screen by name\n
\n

To list your tasks

\n
ls /var/run/screen/S-www-data (or S-anotheruser, warning it is user bound, or sudo it)\n
\n

To kill it

\n
screen -S taskid -X kill\n    -S : identify screen by name\n    -X : access the screen and perform this command\n
\n" }, { "Id": "1394", "CreationDate": "2017-04-23T13:36:34.087", "Body": "

I recently saw this video, in which some students attached an 'attack kit' to a drone and flew it near office blocks.

\n\n

According to their paper:

\n\n
\n

We managed to\n tie a fully autonomous attack kit below a standard drone,\n and performed war-flying in which we flew hundreds of\n meters away from office buildings, forcing all the Hue lamps\n installed in them to disconnect from their current controllers\n and to blink SOS in morse code

\n
\n\n

They then go on to say:

\n\n
\n

We\n use results from percolation theory to estimate the critical mass\n of installed devices for a typical city such as Paris whose area\n is about 105 square kilometers: The chain reaction will fizzle\n if there are fewer than about 15,000 randomly located smart\n lamps in the whole city, but will spread everywhere when the\n number exceeds this critical mass (which had almost certainly\n been surpassed already)

\n
\n\n

Although it's an interesting thought, I'm not sure whether this is really likely. They say that a reasonable range for the bulbs is 100 m, but later note \"The Philips engineers we talk with stated that in a dense urban\nenvironment, the effective range can be less than 30 meters\". I reran their formula with that figure and came out with needing about 168,000 bulbs.

\n\n

On the other hand, if we go with the 'optimistic' estimate of 400 m, the formula predicts that you'd need less than 1000 bulbs!

\n\n

Obviously, the 400 m estimate is very optimistic, so I'm not inclined to trust it much, especially considering all the radio interference that's likely to be around in an urban environment. Philips only promise 'up to 30 m' range, so 400 m seems incredibly optimistic.

\n\n

Is the 'chain reaction' idea unrealistic/slightly exaggerated? It seems to me to just be an attention-grabbing headline, if my calculations are correct. Is there any evidence to show that the range of the bulbs is nearer 100 m, and hence the chain reaction idea is possible?

\n", "Title": "Is a 'chain reaction' of ZigBee bulbs being hacked feasible?", "Tags": "|security|philips-hue|", "Answer": "

Disclaimer: some of the content is conjecture

\n\n

A summary of the experiment is described in the introduction (page 2). The claim is in bold:

\n\n
\n

Our initial discovery was that the Atmel stack has a major bug in its proximity test, which enables any standard ZigBee transmitter (which can be bought for a few dollars in the form of an tiny evaluation board) to initiate a factory reset procedure which will dissociate lamps from their current controllers, up to a range of 400 meters. Once this is achieved, the transmitter can issue additional instructions which will take full control of all those lamps. We demonstrated this with a real war-driving experiment in which we drove around our university campus and took full control of all the Hue smart lamps installed in buildings along the car\u2019s path. Due to the small size, low weight, and minimal power consumption of the required equipment, and the fact that the attack can be automated, we managed to tie a fully autonomous attack kit below a standard drone,\n and performed war-flying in which we flew hundreds of meters away from office buildings, forcing all the Hue lamps installed in them to disconnect from their current controllers and to blink SOS in morse code.

\n
\n\n

The researchers provide empirical evidence of this in their demonstration that a signal can be sent from a transmitter to a vulnerable device from a distance of up to 400 meters away. The researchers sent signals from transmitters 50, 150, and 350 meters away in their wardriving and warflying tests (see sections 8.1.1 \"Wardriving\" and 8.1.2 \"Warflying\"). The maximum theoretical effective range of 400 meters is derived from the outdoor Zigbee wireless range:

\n\n
\n

Our novel takeover attack uses a bug in Atmel\u2019s implementation of the ZLL Touchlink protocol state machine (used in Philips Hue lamps) to take over lamps from large distances (up to ZigBee wireless range that can be as far as 70 meters indoors or 400 meters outdoors [14]), using only\n standard Philips Hue lamps.

\n
\n\n

This is quite different from the rather more sensational claim that it is theoretically feasible to seize control of all such Philips lamps by infecting an exploitable lamp with a program that self-propagates directly from lamp to lamp via Zigbee (the claim is in bold):

\n\n
\n

Our new attack differs from previous attacks on IoT systems in several crucial ways. First of all, previous attacks used TCP/IP packets to scan the internet for vulnerable IoT devices and to force them to participate in internet-based activities such as a massive DDOS attack. Since internet\n communication is heavily monitored and can be protected by a large variety of security tools, such attacks can be discovered and stopped at an early stage, at least in principle. Our attack does not use any internet communication at all, and the infections jump directly from lamp to lamp using only unmonitored and unprotected ZigBee communication. Consequently, it will be very difficult to detect that an attack is taking place and to locate its source after the whole lighting system is disabled.

\n \n

Another major difference is that our attack spreads via physical proximity alone, disregarding the established networking structures of lamps and controllers. As a result, such an attack cannot be stopped by isolating various subnetworks from each other, as system administrators often do when they are under attack. In this sense the attack is similar to air-borne biological infections such as influenza, which spread almost exclusively via physical proximity.

\n \n

Finally, previously reported attacks are carried out via linear scans and infections which are all carried out in a star-shaped structure with a centrally located attacker, whereas our chain reaction attack spreads much faster by making each infected lamp the new source of infection for all its adjacent lamps; the attacker only has to initiate the infecting with a single bad lamp, and can then retire and watch the whole city going dark automatically.

\n
\n\n

The researchers support this claim using a mathematical model in which the effective range of the lamps is assumed to be 50 meters:

\n\n
\n

Consider a city whose area is A, and assume that its shape is roughly circular (i.e., it is flat, convex, not too elongated, and without holes). We place N smart lamps at random locations within the city, and define an infection graph by connecting any two lamps whose distance is smaller than D by an edge. The connected components in this graph define the possible infection patterns which can be started by plugging in a single infected light. For a small N all the components are likely to consist of just\n a few vertices, but as N increases, the graph goes through a sudden phase change in which a single giant connected component (that contains most of the vertices) is created. This is the critical mass at which the infection is likely to spread everywhere in the city instead of remaining isolated\n in a small neighborhood.

\n \n

The mathematical field dealing with such problems is called Percolation Theory, and the critical N is called the Percolation Threshold. A good survey of many variants of this problem can be found in [15], and the particular version we are interested in appears in the section on thresholds for 2D continuum models, which deals with long range\n connectivity via overlapping two dimensional disks of radius R, as described in Fig 1. Since two points are within a distance D from each other if and only if the two disks of radius R = D/2 around them intersect, we can directly use that model to find the critical mass in our model: It is the value N for which the total area of all the randomly placed disks (i.e., \u03c0R\u00b2N) is about 1.128 times larger than the total\n area A of the city. In other words, N = 1.128A/\u03c0(D/2)\u00b2.

\n \n

To get a feeling for how large this N can be, consider a typical city like Paris, which is fairly flat, circular in shape, and with few skyscrapers that can block the available lines of sight. Its total area is about 105 square kilometers [16]. According to the official ZigBee Light Link website [14],the range of ZigBee communication is between 70 meters\n indoors and 400 meters outdoors1. There is probably no single number that works in all situations, but to estimate N it is reasonable to assume that one lamp can infect other lamps if they are within a distance of D = 100 meters, and thus the disks we draw around each lamp has a radius of R = 50 meters. By plugging in these values into the formula, we get that the critical mass of installed lamps in the whole city of Paris is only about N = 15, 000. Since the Philips Hue smart lamps are very popular in Europe and especially in affluent areas such as Paris, there is a very good chance that this threshold had in fact been exceeded, and thus the city is already vulnerable to massive infections via the ZigBee chain reaction described in this paper.

\n \n
    \n
  1. The Philips engineers we talk with stated that in a dense urban\n environment, the effective range can be less than 30 meters
  2. \n
\n
\n\n
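The quoted threshold formula is easy to check numerically. Using the paper's area for Paris and the three ranges discussed in the question (the paper's 100 m, Philips' urban 30 m, and the optimistic outdoor 400 m), a short sketch reproduces both the paper's ~15,000 figure and the question's ~168,000 and sub-1000 figures:

```python
import math

def percolation_threshold(area_m2: float, d_meters: float) -> float:
    """N = 1.128 * A / (pi * (D/2)**2), from the quoted passage."""
    return 1.128 * area_m2 / (math.pi * (d_meters / 2) ** 2)

PARIS_M2 = 105e6  # ~105 square kilometers, as in the paper

print(round(percolation_threshold(PARIS_M2, 100)))  # D = 100 m: ~15,000 lamps
print(round(percolation_threshold(PARIS_M2, 30)))   # D = 30 m:  ~168,000 lamps
print(round(percolation_threshold(PARIS_M2, 400)))  # D = 400 m: fewer than 1,000
```

This makes the sensitivity of the conclusion to the assumed range D explicit: quartering the range multiplies the required number of lamps by sixteen.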

Argument

\n\n

The researchers' claim that the exploit vector and propagation technique employed in their experiment make it possible to infect all vulnerable devices in an urban environment can be framed as an argument, which goes something like this:

\n\n
    \n
  1. \n

    the Atmel stack has a major bug in its proximity test, which enables any standard ZigBee transmitter (which can be bought for a few dollars in the form of an tiny evaluation board) to initiate a factory reset procedure which will dissociate lamps from their current controllers, up to a range of 400 meters. Once this is achieved, the transmitter can issue additional instructions which will take full control of all those lamps.

    \n
    \n\n

    and

    \n\n
    \n

Our novel takeover attack uses a bug in Atmel\u2019s implementation of the ZLL Touchlink protocol state machine (used in Philips Hue lamps) to take over lamps from large distances (up to ZigBee wireless range that can be as far as 70 meters indoors or 400 meters outdoors [14]), using only standard Philips Hue lamps.

    \n
    \n\n

    Premise 1: Philips Hue lamps have a vulnerability that allows them to be exploited over ZigBee.

  2. \n
  3. \n

    Our attack does not use any internet communication at all, and the infections jump directly from lamp to lamp using only unmonitored and unprotected ZigBee communication.

    \n
    \n\n

    and

    \n\n
    \n

    According to the official ZigBee Light Link website [14],\n the range of ZigBee communication is between 70 meters\n indoors and 400 meters outdoors1

    \n
    \n\n

    and

    \n\n
    \n

    There is probably no single number that works in all situations, but to estimate N it is reasonable to assume that one lamp can infect other lamps if they are within a distance of D = 100 meters, and thus the disks we draw around each lamp has a radius of R = 50 meters.

    \n
    \n\n

    Premise 2: In an urban environment, Philips Hue lamps have an effective range of 50 meters on average.

  4. \n
  5. \n

    Consider a city whose area is A, and assume that its shape is roughly circular (i.e., it is flat, convex, not too elongated, and without holes). We place N smart lamps at random locations within the city, and define an infection graph by connecting any two lamps whose distance is smaller than D by an edge. The connected components in this graph define the possible infection patterns which can be started by plugging in a single infected light. For a small N all the components are likely to consist of just a few vertices, but as N increases, the graph goes through a sudden phase change in which a single giant connected component (that contains most of the vertices) is created. This is the critical mass at which the infection is likely to spread everywhere in the city instead of remaining isolated in a small neighborhood.

    \n
    \n\n

    and

    \n\n
    \n

    the critical N is called the Percolation Threshold

    \n
    \n\n

    and

    \n\n
    \n

Since two points are within a distance D from each other if and only if the two disks of radius R = D/2 around them intersect, we can directly use that model to find the critical mass in our model: It is the value N for which the total area of all the randomly placed disks (i.e., \u03c0R\u00b2N) is about 1.128 times larger than the total area A of the city. In other words, N = 1.128A/\u03c0(D/2)\u00b2.

    \n \n

    To get a feeling for how large this N can be, consider a typical city like Paris, which is fairly flat, circular in shape, and with few skyscrapers that can block the available lines of sight. Its total area is about 105 square kilometers [16].

    \n
    \n\n

    and

    \n\n
    \n

    to estimate N it is reasonable to assume that one lamp can infect other lamps if they are within a distance of D = 100 meters, and thus the disks we draw around each lamp has a radius of R = 50 meters. (Premise 2)

    \n
    \n\n

    Premise 3:

    \n\n
    \n

    By plugging in these values into the formula, we get that the critical mass of installed lamps in the whole city of Paris is only about N = 15, 000.

    \n
  6. \n
  7. \n

    the Philips Hue smart lamps are very popular in Europe and especially in affluent areas such as Paris

    \n
    \n\n

    Premise 4: Popularity of the Philips Hue lamp is indicative of its prevalence in Paris

  8. \n
\n\n

Conclusion:

\n\n
\n

there is a very good chance that this threshold had in fact been exceeded, and thus the city is already vulnerable to massive infections\n via the ZigBee chain reaction described in this paper

\n
\n\n

Analysis (with conjecture)

\n\n

This is a weak argument, since only premise 1 is supported by empirical evidence from reproducible experiments.

\n\n

Challenges to the arguments

\n\n

While it has been demonstrated in the experiment that a vulnerability in the lamps can be exploited over ZigBee at distances of 50, 150 and 300 meters, and the program that exploited the vulnerability successfully propagated itself over ZigBee to the other lamps within range, it was never demonstrated that this propagation would take place outside of the controlled environment in the experiment on the scale claimed by the researchers (that is, city-wide propagation).

\n\n

In the mathematical model based on percolation theory used by the researchers to model a \"typical\" urban environment, the average effective range between lamps was given as 50 meters. Use of this value as the average effective range can easily be challenged with information readily provided by the manufacturer.

\n\n

The model also relies on a random distribution of lamps throughout the city, but this seems unlikely, since consumers of this product typically have a higher disposable income, and this type of consumer is probably not randomly distributed throughout the city of Paris, the city used for the model.

\n\n

Here is their characterization of a typical city:

\n\n
\n

To get a feeling for how large this N can be, consider a typical city like Paris, which is fairly flat, circular in shape, and with few skyscrapers that can block the available lines of sight. Its total area is about 105 square kilometers [16].

\n
\n\n

The choice of Paris as a model for a typical city can easily be challenged as well. I'm no expert, but a city with few skyscrapers does not seem to be typical.

\n\n

The problems with the claim are more easily recognized when the claim is examined in terms of conditions required for city-wide propagation of a program that infects Philips Hue lamps over ZigBee.

\n\n

Given that

\n\n
    \n
  1. the city is approximately round AND
  2. \n
  3. the city is flat AND
  4. \n
  5. the city has few skyscrapers AND
  6. \n
  7. the distribution of Philips Hue lamps across the city is random
  8. \n
\n\n

then the distribution of lamps can be modeled using percolation theory, and if, in addition to this,

\n\n
    \n
  1. there are enough Philips Hue lamps in the city to reach the percolation threshold AND
  2. \n
  3. the average effective range of the lamps in this environment is 50 meters
  4. \n
\n\n

then, according to the model, everyone will die

\n\n
\n

our chain reaction attack spreads much faster by making each infected lamp the new source of infection for all its adjacent lamps; the attacker only has to initiate the infecting with a single bad lamp, and can then retire and watch the whole city going dark automatically.

\n
\n\n

and

\n\n
\n

the city is already vulnerable to massive infections via the ZigBee chain reaction described in this paper

\n
\n\n

Unless these conditions are fulfilled, the model does not work. This is probably part of the reason why Paris was chosen as an exemplar for this paper, even though the research was conducted on the campus of Dalhousie University, Halifax, Canada.

\n\n

To answer the question directly

\n\n
\n

Is a 'chain reaction' of ZigBee bulbs being hacked feasible?

\n
\n\n

In an ideal setting, namely a hypothetical city that conforms to the favorable parameters in the model, a city-wide \"chain-reaction\" is feasible according to percolation theory.

\n\n

The researchers demonstrated that their program automatically self-propagates over ZigBee and infects vulnerable lamps within the range of any one of the previously infected lamps. Everything beyond that is hypothetical.

\n\n

When the researchers say that this would enable them to \"watch the whole city going dark automatically\", this would only be the case if the above conditions were met plus the majority of the lights in the city were Philips Hue lamps, which seems to be... unrealistic.

\n" }, { "Id": "1402", "CreationDate": "2017-04-25T12:45:03.510", "Body": "

Connectivity technologies have been diversifying lately to adapt to continuously evolving demands. In the absence of a single GSM-like standard, there are several competing communication technologies. I'm interested to know if there are chipsets with support for several of these technologies. I'm trying to avoid the Betamax-vs-VHS problem, so I'm looking for chipsets that support multiple protocols; does anyone know of any?

\n", "Title": "Are there any all-in-one chipsets that support LoRaWAN and Sigfox?", "Tags": "|lorawan|sigfox|", "Answer": "

The SX127x family of chips from Semtech are modems supporting both LoRa and (G)FSK modulations (including ultra-narrow-band FSK). Coupled with a microcontroller implementing the relevant stack(s), they are already used with the following IoT network protocols: LoRaWAN, Sigfox, Wireless M-Bus, DASH7 and Symphony Link. They could probably also be used with EnOcean, Zigbee sub-gigahertz and Z-Wave, but I know of no examples of that, and some of those are very closed ecosystems.

\n\n

LoRaWAN stacks are available publicly, license-free. The availability of other stacks is variable, and you will have to do the co-integration of several stacks yourself. If you only need to support one stack at a time, and just want the ability to switch networks by re-flashing your micro, then the problem is simpler. With a modem from TI or SiLabs you can support most of the FSK-based protocols. With a circuit from Semtech you gain additional support for the few LoRa-based protocols.

\n\n

To future-proof an LPWAN product, some modules are already available with a dual LoRaWAN/Sigfox stack: http://www.nemeus.fr/en/nemeus-mm002-2/\nYou cannot connect a module that is not qualified by Sigfox to the Sigfox network, so using this kind of module can save a lot of time and hassle.

\n\n

If a module is too big, there is a SiP combining an SX1276 and a microcontroller: http://www.acsip.com.tw/index.php?action=products-detail&fid1=19&fid2=&fid3=&id=79 but in that case you will have to provide the stack yourself.

\n\n

There is no single chip available right now that combines a microcontroller, a LoRa/FSK modem and a ROMed stack, like you can find for Bluetooth. Given the minimum size requirement for an acceptable 868 MHz antenna (> 10 cm\u00b2), gaining a few square millimeters of chip area by doing an RF SoC is probably not worth it.

\n" }, { "Id": "1412", "CreationDate": "2017-04-26T13:59:17.573", "Body": "

From the comments on this question I found out there is some drastic difference depending on whether we speak about a chipset or a module.

\n\n

I could not find a definition that clearly distinguishes these terms; neither my English skills nor a Wikipedia search gave me enough information.

\n\n

So, how would you explain these two terms so that the difference becomes clear?

\n", "Title": "What is the difference between a module and a chipset?", "Tags": "|definitions|", "Answer": "

A module is a physical unit, which satisfies some function (e.g. a WiFi module), and is normally made of smaller parts. These parts have been incorporated into one monolithic item. A chipset is a collection of individual elements which have been integrated to provide a function (for example, allowing the exchange of information between a processor, peripherals and memory).

\n" }, { "Id": "1419", "CreationDate": "2017-04-28T12:17:45.903", "Body": "

MQTT is widely used in IoT when it comes to exchanging application data between the end device and the host service. The publish-subscribe model makes it easy to use: no handshaking, negotiating, etc. (at least above the MQTT protocol layer). It's primarily geared towards data producers being able to distribute their data easily to consumers.

\n\n

However, when it comes to a central server wanting to configure settings on an end device, I'm not sure that the model is very suitable. The server will want to send a command to the device and wait for a response back (e.g. read a specific setting, wait for response), which doesn't really suit MQTT's publish-subscribe model.

\n\n
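The closest pattern I've found is to emulate request/response over a pair of correlated topics: the server publishes a command carrying a request ID and listens on a matching reply topic. A rough sketch of such a topic scheme (all names here are hypothetical, not from any particular library):

```python
import uuid
from typing import Optional, Tuple

def command_topics(device_id: str, request_id: Optional[str] = None) -> Tuple[str, str]:
    """Build a (request, response) topic pair for one command exchange.

    The server publishes the command on the first topic and subscribes
    to the second; the device does the reverse, echoing the request ID
    so the server can match a reply to the command it sent.
    """
    request_id = request_id or uuid.uuid4().hex
    base = f"devices/{device_id}/config"
    return f"{base}/set/{request_id}", f"{base}/ack/{request_id}"

req, resp = command_topics("lamp-42", request_id="abc123")
assert req == "devices/lamp-42/config/set/abc123"
assert resp == "devices/lamp-42/config/ack/abc123"
```

But that is still a workaround bolted onto publish-subscribe, with timeouts and retries left entirely to the application, which is what prompts the question below.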

I was wondering whether there are any existing protocols that are geared towards sending and receiving commands and configuring remote devices?

\n", "Title": "Protocol for configuring IoT device settings", "Tags": "|networking|mqtt|protocols|", "Answer": "
\n

I was wondering whether there are any existing protocols that are\n geared towards send and receiving commands and configuring remote\n devices?

\n
\n\n

Yes, there is a better protocol for device management in IoT.\nIt is LwM2M - it is much more efficient than MQTT and sits above CoAP, MQTT and HTTP.

\n\n

LwM2M comes with a well-defined data and device management model, offering a variety of ready-to-use standard objects (IPSO Smart Objects), connectivity monitoring, remote device actions and structured FOTA and SOTA updates, whereas in MQTT these features are entirely vendor and platform-specific. What follows is that with MQTT, firmware updates or any other management features must be created from scratch. Contrastingly, LwM2M offers firmware upgrades as one of its basic functionalities, so there is no need to invent any new building blocks for communication.

\n\n

Here you have a comparison of MQTT vs LwM2M, along with a whole crash course.
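That said, if you stay with MQTT, the request/response exchange the question describes is usually emulated with paired command and response topics plus a correlation id. The sketch below is illustrative only: `ToyBroker` is an in-memory stand-in, and the topic names and payload format are assumptions; real code would use an MQTT client library (e.g. paho-mqtt) against an actual broker.

```python
# Toy in-memory stand-in for an MQTT broker, to show the pattern only.
# The server publishes to cmd/<device>; the device replies on
# resp/<device> carrying the same correlation id, so the server can
# match the response to its request.
class ToyBroker:
    def __init__(self):
        self.subs = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        for cb in self.subs.get(topic, []):
            cb(topic, payload)

def make_device(broker, device_id, settings):
    """Device side: answer '<corr_id>:get:<key>' commands on its command topic."""
    def on_command(topic, payload):
        corr_id, _, key = payload.partition(':get:')
        broker.publish(f'resp/{device_id}', f'{corr_id}:{settings.get(key)}')
    broker.subscribe(f'cmd/{device_id}', on_command)

def read_setting(broker, device_id, key, corr_id='req-1'):
    """Server side: send a command and capture the correlated response."""
    result = {}
    def on_response(topic, payload):
        rid, _, value = payload.partition(':')
        if rid == corr_id:          # ignore responses to other requests
            result['value'] = value
    broker.subscribe(f'resp/{device_id}', on_response)
    broker.publish(f'cmd/{device_id}', f'{corr_id}:get:{key}')
    return result.get('value')
```

With a real broker the two halves would run on different machines; the correlation-id idea is the same.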

\n" }, { "Id": "1425", "CreationDate": "2017-04-30T20:36:26.087", "Body": "

This question asks, amongst other things, if there is a big learning curve between using Python on a Raspberry Pi to prototype an endpoint, and using a microcontroller.

\n\n

Clearly there is a big improvement in power consumption (at the cost of reduced processor throughput) so there are good reasons to take the MCU approach for a product which needs to be battery powered.

\n\n

One of the potential reasons to stick with a single-board computer which runs Linux is that there is no new software to learn (above python or similar) assuming the application can be written in a high level language (where there should be plenty of standard libraries).

\n\n

On an embedded development platform, the likely choices are C++ (mbed or arduino), or micropython. My impression is that these are not significantly different or more complex than writing code to run under Linux - although the platforms do have individual advantages. Have I missed anything which is relevant to a software developer?

\n\n

Specifically, I'm asking about IoT endpoints - so it's not essential to have the full resources of a Linux system for the applications I'm interested in here. It's also worth emphasising that power and latency considerations make the mcu implementation a hard requirement in this type of application.

\n", "Title": "Is there a big jump between prototyping on a Pi, and using a microcontroller?", "Tags": "|microcontrollers|raspberry-pi|", "Answer": "

Developing an application on a Pi can be very different from, or quite similar to, developing an application on a microcontroller, depending on hardware differences as well as software development toolchain differences.

\n\n

There is a wide range of microcontrollers available, with anywhere from 8-bit to 64-bit processors and anywhere from a few KB of RAM to a few gigabytes. More capable microcontrollers provide a more Pi-like experience; less capable microcontrollers do not.

\n\n

And even with the Pi there are large differences between developing for the Windows 10 IoT operating system and developing for Raspbian, Mate, or another Linux based OS. Windows 10 IoT requires a development PC using a Visual Studio toolchain with a remote debugger targeting the Universal Windows Platform (UWP) environment. Development for Raspbian or Mate can actually be done on a Pi with the tools available on the Pi.

\n\n

The Constrained Application Protocol is used for small, constrained devices being used with the Internet of Things environment. To get an idea of the variety of microcontroller hardware and software, this page on the CoAP protocol implementation provides an idea of the environment it is targeting. It mentions the Contiki operating system which I have vaguely heard of along with better known OSs such as iOS, OSX, and Android. Programming languages mentioned are Java, JavaScript, C, C#, Ruby, Go, Erlang, Rust, and Python.

\n\n

The toolchain used for development with a microcontroller varies depending on the manufacturer as well as on what kinds of resources are available from development communities and open source initiatives. In some cases you get a cross assembler, in other cases a C cross compiler, and in other cases a nice toolchain with all the bells and whistles, emulators and such, similar to the Visual Studio toolchain for Windows 10 IoT.

\n\n

The actual development environment for a microcontroller may involve using an EEPROM programmer and the software tools to create a new image and push it to the device or the device may have the necessary connectivity to allow a new image to be downloaded over a serial connection or over a network connection.

\n\n

My impression is that most microcontrollers have a C cross compiler, though the compiler may only support older standards such as K&R or C89. C cross compilers often have non-standard keywords for microprocessor specific features, for example the far and near keywords for pointers on the old 8086 processors with their segmented memory.

\n\n

There are also specialty languages that target microcontrollers such as FORTH programming language. These languages often have a run time design that targets the bare metal so that there is no operating system other than the language run time.

\n\n

Operating systems may range from practically non-existent to a bare-bones Linux to a specialty OS such as FreeRTOS or Windows Embedded, or a full-blown Linux or Microsoft Windows. See this SourceForge project MINIBIAN for Raspberry Pi. See as well this eBook, Baking Pi: Operating Systems Development, which describes the development of a rudimentary OS for Raspberry Pi in assembler.

\n\n

This article from Visual Studio Magazine, Programming the Internet of Things with Visual Studio, provides an overview of the many different devices available followed by an overview of using the Visual Studio IDE for development for Linux as well as Windows.

\n\n
\n

There's a huge and growing universe of off-the-shelf, programmable,\n networkable microcontroller devices available now. At a very low level\n you have a variety of simple 16- and 32-bit devices from a variety of\n traditional chip makers like Texas Instruments. (I played a bit with\n the SensorTag development kit and it's a lot of fun, making me think\n the Watch DevPack might be a great learning toolset, too.)

\n \n

Some better-known microcontroller devices include Arduino, BeagleBoard\n and Raspberry Pi. These environments all have extensive community\n support and are ready to plug in to a huge number of ready-made\n external sensors, motors, servos and whatever else you might imagine.\n Adafruit, the electronics learning superstore founded by Limor\n \"Ladyada\" Fried, provides all sorts of peripherals for these boards,\n along with its own line of lightweight Feather development boards.

\n
\n\n

...

\n\n
\n

The most interesting universe of devices for developers familiar with\n the Microsoft .NET Framework and Visual Studio may be Windows 10 IoT\n Core-compatible environments. These are x86 and ARM-powered devices\n that support Universal Windows Platform (UWP) apps written in a\n variety of languages including C#, Visual Basic, Python and\n Node.js/JavaScript. Windows 10 IoT core supports devices including\n Raspberry Pi, Arrow DragonBoard 410C, Intel Joule and Compute Stick\n and MinnowBoard. There are also interesting product platforms, such as\n the Askey TurboMate E1 wearable.

\n
\n\n

A Specific Example of a Microcontroller application

\n\n

This is an image of a microcontroller board from an automated coffee maker. This appears to be a standard component for automated coffee makers manufactured in China. The web site for the manufacturer is printed on the PCB.

\n\n

The image is composed of two views. The view on the left is the back of the board containing the microcontroller and supporting circuitry. The view on the right is the front of the board with the LCD screen and a set of buttons which are used to set the current time and to perform actions such as programming a start time, etc.

\n\n

The view on the right fits into a carrier which then fits into an opening in the front of the coffee maker. The switches on the lower PCB are actuated with rocker arm switches. The LCD, which seems to be special purpose, is used to display the current time and status as well as to display the user interface when changing the settings of the coffee maker. The red LED is used to indicate when the coffee maker is actually making coffee and to indicate when done by turning the illumination back off.

\n\n

\"enter

\n\n

The microcontroller is an ELAN Microelectronics Corp EM78P447NAM (datasheet), which is an 8 bit microcontroller. Some of the basic stats show what a small and minimal device this is; however, it works nicely for its intended purpose. The intent is to develop software which is then downloaded into the write-once ROM as a part of manufacturing.

\n\n
\n

\u2022 Low power consumption:

\n\n
* Less than 2.2 mA at 5V/4MHz\n\n* Typically 35 \u00b5A at 3V/32KHz\n\n* Typically 2 \u00b5A during sleep mode\n
\n \n

\u2022 4K \u00d7 13 bits on chip ROM

\n \n

\u2022 Three protection bits to prevent intrusion of OTP memory codes

\n \n

\u2022 One configuration register to accommodate user\u2019s requirements

\n \n

\u2022 148 \u00d7 8 bits on-chip registers (SRAM, general purpose registers)

\n
\n" }, { "Id": "1435", "CreationDate": "2017-05-03T15:38:30.177", "Body": "

Amazon's recently released its new Echo Look device, which is a modified Amazon Echo device with a camera added. One of the new features is a 'style check'. From TechCrunch:

\n
\n

The device also works with the company\u2019s Style Check, a feature of the Echo Look app, which uses machine learning to compare different outfit choices, awarding them an overall style rating.

\n

The app uses a combination of machine learning and advice from experts in the style space. Letting AI pick out your clothing in the morning should be a pretty interesting experiment.

\n
\n

I'm interested in how, exactly, the 'machine learning' determines which clothes are more or less stylish. I would imagine that style is a very subjective measure which is difficult to capture with a computer, but I'm not particularly familiar with developments with regards to using computer vision for fashion.

\n

I'm not holding my hopes too high that Amazon would disclose how their algorithms work, but are there any similar approaches with regard to computer vision and style? Is there any information about how Amazon might have implemented this, or already-known methods?

\n", "Title": "How does the Amazon Echo Look determine your 'style rating'?", "Tags": "|alexa|machine-learning|amazon-echo-look|", "Answer": "

From Amazon's published description:

\n\n
\n

\"Submit two photos for a second opinion on which outfit looks best on you based on fit, color, styling, and current trends. Over time, these decisions get smarter through your feedback and input from our team of experienced fashion specialists.\"

\n
\n\n

So, you give it your two best choices, and from them the machine will try to pick the better one.

\n\n

This gives some hints about what the machine would need to identify and learn:

\n\n
    \n
  1. First of all, identify which clothes in different pictures are the same items in real life. This helps compare against other, similar decisions.

  2. \n
  3. Identify personal characteristics to combine with different types of clothing. Different clothes fit different kinds of people.

  4. \n
  5. Gather people's own opinions, collected from sales statistics or from the experts, perhaps by crawling through fashion pages.

  6. \n
  7. Determine what kinds of clothes mostly make up your own top two.

  8. \n
  9. Check whether there is some theory to be found about what 'to fit' means to you and to others.

  10. \n
  11. Etc
  12. \n
\n\n

edit:

\n\n

Pocket-lint gives some background and says the feature was originally a premium feature of Amazon's shopping app. I got the impression from the article that the analysis may not be quite instant, and thus the 'wisdom' behind it may simply be human reviewers. Who knows.

\n" }, { "Id": "1441", "CreationDate": "2017-05-08T18:29:23.540", "Body": "

TechCrunch recently ran an article on the \"Internet of Things 2.0\", and seem to say that it will be a major architectural change from what currently goes on.

\n\n

Some examples of what they consider \"IoT 2.0\":

\n\n\n\n

They also say:

\n\n
\n

It might be five years distant, but IoT 2.0 is on its way, with device miniaturisation, better power efficiency and connectivity, more sophisticated system architectures, and new machine learning algorithms all in the pipeline. Says Tcherevik: \u201cIoT 2.0 is all but inevitable.\u201d

\n
\n\n

However, some of the points made there already seem to be considered good practice, so it seems to me that their idea of an \"IoT 2.0\" is mostly just confusing and not very valuable.

\n\n

Is there anything I'm missing, or does the article just bundle together current best practices and call it \"IoT 2.0\"?

\n\n
\n\n

Obviously, it's difficult to speculate on what the future actually will hold, so I don't expect to discuss what could happen, rather: aren't most of the things mentioned already possible, just not adopted for cost-saving/complexity reasons?

\n", "Title": "Is TechCrunch's definition of \"IoT 2.0\" significantly different to current best practices?", "Tags": "|definitions|", "Answer": "

There is a big difference between current best practice, what is practical to implement at a sensible cost today, and the products designed in the past few years.

\n\n

Not too long ago, we saw Bluetooth locks appear, Kickstarter style, as much a proof of concept as anything else. As I remember, none were anything other than trivially insecure - but they were useful to explore the possibilities. As always, it will take a couple of product iterations to close out the less obvious holes.

\n\n

The realities of a standardised platform are not quite here. As a recent question identified, a good, standardised, secure firmware-over-the-air platform is yet to emerge. The MCU devices in production do support this now (banked flash, etc). There are several possibilities, but many developers would need to start some research before implementing something today.

\n\n

On cryptography, the larger endpoint devices are capable today (phone CPUs with signed bootloaders and strong isolation between secure code and applications are not new), but there is less choice at the low end.

\n\n

As the article identifies, there is no big step change coming. It will take a while for all of today's best practices to become ubiquitous.

\n\n

Pervasive connectivity is not here yet either - Cambridge only just rolled out a development LoRa network (for researchers). Suitable technology exists, but it's not as available as (for example) assuming nearly every home has better than 200kbps broadband upload capability (contrast now to 5 or 10 years ago).

\n" }, { "Id": "1448", "CreationDate": "2017-05-10T12:57:00.240", "Body": "

I'm a web developer - so IoT is not my speciality at all - and I've been asked to find the cheapest and most efficient way (in this order of priority) to build a gizmo for a sport event (can't be more specific).\nThis is how it should work:

\n\n
    \n
  1. A Competitor wears a wristband carrying his unique ID.
  2. \n
  3. At one place there is a terminal which will scan the wristband on contact, so organizers will know, via a web app, at what time the competitor arrived at this terminal.
  4. \n
  5. The Competitor must stay 3 seconds at the terminal and can't just extend their arms forward; they must be at the terminal.
  6. \n
  7. The Competitor is notified that his wristband has been successfully scanned and can now move to the next terminal. And so on.
  8. \n
\n\n

So my question is: what should I use for the wristband and the terminal, knowing that the bracelets are throwaways?

\n\n

EDIT - More details :

\n\n\n", "Title": "Should I use NFC, RFID or something else?", "Tags": "|nfc|rfid|", "Answer": "

How far away from your base computer do the terminals need to be? Does it need to be a relatively real-time system or can the check-ins be cached for a few seconds?

\n\n

If you could get away with the range of WiFi, and the potential latency of an MQTT message (a good protocol if you need QoS), I think an ESP8266 microcontroller with one of these RFID readers would be a nearly ideal setup.

\n\n

(I personally have a couple of WeMos D1 Minis; note these are not the cheapest they can be found, but I try not to promote knock-offs.)

\n\n

I've primarily used the NodeMCU firmware, but there's no baked-in library for PN532 RFID chips, so you'd have to read/write I2C/SPI registers manually. Adafruit has a library for the Arduino IDE, but it only works over I2C (it seems under-tested / under-developed for the ESP8266).

\n\n

One of the benefits of a setup like this is that you could quite easily make these battery powered with a usb battery bank (watch out because some turn off if they don't sense enough current draw).

\n\n

If I were to build these with parts from aliexpress (super cheap) this would be my shopping list:

\n\n\n\n

Then for deployment, you'd need some sort of decent wifi access point that can handle a bunch of lightweight connections (some have a cap on # of connections) and probably a laptop running the mqtt host and your web app server.
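The question's requirement that a competitor stay three seconds at the terminal can be handled in the terminal's software as a presence debounce: only register a check-in once the same tag has been seen continuously for the dwell time. This is a hypothetical sketch of that logic (class and parameter names are my own); timestamps are passed in explicitly so it is testable without reader hardware.

```python
DWELL_SECONDS = 3.0  # the "must stay 3 seconds" rule from the question

class PresenceDebouncer:
    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.tag = None         # tag currently held at the reader
        self.first_seen = None  # when that tag first appeared

    def on_scan(self, tag_id, now):
        """Call on every RFID poll. Returns tag_id exactly once, when the
        tag has been continuously present for `dwell` seconds; else None."""
        if tag_id != self.tag:
            # Different tag (or first scan): restart the dwell timer.
            self.tag = tag_id
            self.first_seen = now
            return None
        if now - self.first_seen >= self.dwell:
            self.first_seen = float('inf')  # fire once per continuous presence
            return tag_id
        return None
```

On the real terminal, `on_scan` would be called from the RFID polling loop with the controller's clock, and a successful return would trigger the MQTT publish plus the acknowledgement (LED/beep) to the competitor.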

\n" }, { "Id": "1449", "CreationDate": "2017-05-10T14:13:00.953", "Body": "

I've configured my Echo and the associated Alexa app for calling and can call registered users. However, though I've read about the drop-in calling feature, I don't see any way to enable it. How do I enable and use drop-in?

\n", "Title": "How do I enable drop-in calling on Alexa?", "Tags": "|alexa|amazon-echo|", "Answer": "

On Android, the app (at least mine, from June 26, 2017) does not have the option to enable Drop In for specific contacts.\nHowever, I installed version 2.1.22 of the iOS app, which does have that option; presumably later versions will also work.\nSo now I can use Drop In to reach another specific device in my household.

\n" }, { "Id": "1475", "CreationDate": "2017-05-16T06:23:25.667", "Body": "

I am working on Arduino Nano (32Kb flash memory of which 2Kb used by boot loader, 2Kb SRAM, 1Kb EEPROM).

\n\n

The micro-controller takes input from an electrical device via RS485 module and posts the data read to a remote server using GPRS A6 module. The product is supposed to interact with the remote server, posting data at intervals.

\n\n

I have completed the integration part and the device works fine, collecting data and posting it to the server (approx. 10-15 km away). The only challenge I am facing is this: if there are 100 such devices and I need to update the firmware using the remote server (or any other suitable mechanism), how should I proceed?

\n\n

I have been through many posts that suggest using another Arduino as ISP, this could be my last approach (as it would increase the final cost of product).

\n\n

Over-the-air firmware updates for low-end microcontrollers are still an unclear topic on the Stack Exchange community. Any discussion could be a great help to many.

\n", "Title": "Remote Firmware Update Arduino Nano", "Tags": "|over-the-air-updates|arduino|", "Answer": "

This question has been answered, but perhaps this will be valuable for others.

\n\n

You can wire an ESP32-based board or module to your Arduino, and use https://vcon.io for remote OTA. The vcon firmware can act as an AVR (and not only AVR) programmer, and reflash your Arduino remotely.

\n\n

Also, as a side-effect, you'll get remote control capability for your Arduino. https://dash.vcon.io cloud service gives you device dashboard and an API for remote control and OTA.

\n\n

Disclaimer: I do represent https://vcon.io product.

\n" }, { "Id": "1479", "CreationDate": "2017-05-16T12:32:35.133", "Body": "

I would like to design a system using Raspberry Pi that sends the sensor's data to server continuously and receive commands from server.

\n\n

Will MQTT suit my needs ?

\n\n

Is there any way to do so if I use Java on my Pi?

\n\n

Edit

\n\n

By continuously I mean that streams like video are continuous, while other text-based data is sent twice every minute.

\n\n

The sensors are:

\n\n

Humidity sensor - http://www.amazon.in/DHT11-Temperature-Humidity-Sensor-Module/dp/B01HI9G9ZU?tag=googinhydr18418-21&tag=googinkenshoo-21&ascsubtag=710c9d6b-87d0-41e2-b3e0-06a1045769f3

\n\n

A 5MP camera (Webcam connected to USB of the Pi.)

\n\n

LDR(Light and Dark) - Sensor

\n\n

The server is based on a cloud hosting location.

\n", "Title": "Raspberry Pi to send sensor's data to server continuously and receive commands from server", "Tags": "|communication|raspberry-pi|data-transfer|", "Answer": "
\n

By continuously I mean that streams like video are continuous

\n
\n\n

If you are considering continuous video streaming from the Pi, then LIVE555 Streaming Media may serve your purpose. Live555 will provide the following:

\n\n\n" }, { "Id": "1482", "CreationDate": "2017-05-16T14:20:47.207", "Body": "

The Setup:

\n\n

I have a Raspberry Pi as the master node which is connected to the internet through a broadband connection, the raspberry pi connects several sensors and other microcontrollers. The Pi is continuously connected to a server at a Cloud Hosting Provider.

\n\n

The Questions are:

\n\n\n", "Title": "How to protect Raspberry Pi from attack in an IoT setup connected through a Broadband network?", "Tags": "|security|networking|raspberry-pi|", "Answer": "

Okay. Given the comments so far, here's how I'd approach it:

\n\n
    \n
  1. Set up DDNS through any competent provider.
  2. \n
  3. Set up OpenVPN on your PI, and route UDP port 1194 (or whatever port you set it up on) from the router to the PI. All external connections to your PI will have to have a properly configured OpenVPN client (you could even use a phone!)
  4. \n
  5. As a secondary measure, secure inbound access on the PI using IPTables. It's a pain in the butt to do by hand, so install Webmin (Debian) to configure it. From here, do a Google search on ways to harden your IPTables configuration against DDOS.
  6. \n
\n\n

You might prefer some other VPN, but I've used OpenVPN for about 10 years now for its incredible flexibility.
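As a starting point for the IPTables step, a default-deny inbound policy that still admits the OpenVPN port might look like the fragment below. This is illustrative only; adapt interfaces, ports and rate limits to your own setup (Webmin will generate equivalent rules for you).

```shell
# Default-deny inbound; allow loopback and already-established traffic.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow the OpenVPN port routed from the router (UDP 1194 in step 2).
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

# Crude rate limit on new TCP connection attempts (basic DoS mitigation).
iptables -A INPUT -p tcp --syn -m limit --limit 10/second -j ACCEPT
```

With the default policy set to DROP, anything not explicitly accepted above (including excess SYNs) is silently discarded.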

\n" }, { "Id": "1492", "CreationDate": "2017-05-18T08:34:14.997", "Body": "

What are the major differences between MQTT and Web Sockets?

\n\n

When using IoT for home automation - control of and monitoring access to different devices - which one of them should be used when REST API based and browser based accessibility is required?

\n\n

I am using Java (Pi4J Library) on a Raspberry Pi 2 B+.

\n\n

I have a setup of several sensors like light and dark, humidity, PID etc.

\n\n

I also have a cloud server where I can send the data if required.

\n", "Title": "What is the difference between MQTT and Web Sockets, and when should I use them?", "Tags": "|communication|monitoring|mqtt|", "Answer": "

They're comparable in that both allow you to have full-duplex communication such that the server can immediately pass data to the client, without the client polling for it (as might be with HTTP).

\n\n

However, Websockets is designed for a simple point-to-point connection between a client and a server. MQTT layers extra abstractions on top of basic message sending, so that multiple interested parties can subscribe to messages that may interest them. Messages can therefore be routed by 'message topic' so that many clients can share a notional queue, where a server can choose to hear all messages from all clients, but may also filter by topic.

\n\n

MQTT has a variety of other useful features, e.g. retained messages, such that subscribers immediately receive the message, and the LWT (Last Will and Testament) which is a message that can be sent automatically if the client abnormally disconnects. In summary, MQTT is 'higher up the stack' offering features and abstractions that a simple Websocket does not.
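The topic-based routing described above relies on MQTT's `+` (single-level) and `#` (multi-level) wildcards. As an illustrative sketch of how a broker decides whether a subscription filter matches a concrete topic (simplified; it ignores spec edge cases such as topics beginning with `$`):

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter (possibly containing the
    + and # wildcards) matches a concrete topic string."""
    f_parts = filter_.split('/')
    t_parts = topic.split('/')
    for i, f in enumerate(f_parts):
        if f == '#':               # multi-level wildcard: matches the rest
            return True
        if i >= len(t_parts):      # filter is longer than the topic
            return False
        if f != '+' and f != t_parts[i]:  # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)
```

So a client subscribed to `home/+/temperature` hears every room's temperature, while `home/#` hears everything under `home`; Websockets has no equivalent built-in routing.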

\n" }, { "Id": "1494", "CreationDate": "2017-05-18T17:28:51.923", "Body": "

I can't get my Onion Omega2 to connect to my ethernet via a static IP address and two DNS addresses. To connect my PC, I have to set the following:
\nIP address: 82.149.xxx.xxx
\nsubnet mask: 255.255.255.0
\ngateway: 82.149.xxx.xxx
\nDNS server: 212.xxx.xxx.xxx, 83.xxx.xxx.xxx

\n\n

Which settings have to be made in /etc/config/network and possibly elsewhere?

\n\n

I tried the following without success:

\n\n
config interface 'wan'                \n    option proto 'static'         \n    option ifname 'eth0'       \n    option ipaddr '82.149.xxx.xxx'  \n    option netmask '255.255.255.0'\n    option gateway '82.149.xxx.xxx'  \n    list dns '83.xxx.xxx.xxx'    \n    list dns '212.xxx.xxx.xxx'\n
\n", "Title": "Connect Onion Omega2 to static ethernet", "Tags": "|networking|ethernet|ip-address|onion-omega2|", "Answer": "

The given settings in /etc/config/network are correct. However, to apply them it isn't sufficient to restart the network via /etc/init.d/network restart; the DNS servers also have to be applied via /etc/init.d/dnsmasq restart, which had not been done. After that, the Onion Omega2 is able to connect to the internet via its ethernet connection.
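In other words, after editing /etc/config/network, both services have to be restarted:

```shell
# Apply the new static-IP settings, then reload DNS handling
/etc/init.d/network restart
/etc/init.d/dnsmasq restart
```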

\n" }, { "Id": "1517", "CreationDate": "2017-05-25T17:29:36.290", "Body": "

I've heard many people mention that their embedded IoT devices aren't powerful enough to process known protocols, like HTTPS or even TLS security for sockets.

\n\n

Instead, they turn to creating their own protocol to produce a custom communication system that suits their particular use case, although, typically, little time is actually spent developing the protocol, because it's not a particularly important factor. Usually, these homebrew protocols include authentication, security, encryption, etc.

\n\n

This article suggests some of the many pitfalls which would seem to be waiting for anyone who did go down the route of writing their own protocol, and it's well known that you shouldn't try to write your own encryption.

\n\n

Are there ever any cases where you would have to write your own protocol, rather than using an existing, tested protocol? How can you tell if rolling your own is a reasonable idea, rather than a big security risk?

\n", "Title": "Is it ever a good idea to 'roll your own' protocol for IoT device communication?", "Tags": "|security|communication|protocols|", "Answer": "

The times when it makes sense to roll your own are quite limited. When it comes to device constraints, it's important to look at the whole system performance. For sure, the state of the art does move on, but the goal should be optimising endpoint energy performance rather than working out how to cope with a device that hasn't got a good entropy source, or enough memory to support a suitable standard.

\n\n

These are the scenarios where it does make some sense.

\n\n
    \n
  1. Roll your own to learn. Particularly if you're interested in prototyping and making relative comparisons. Once you've learnt, pick the best standard guided by your research.

  2. \n
  3. Improve on the current state-of-the-art. More secure, and more efficient sounds like a win. Might even make you rich.

  4. \n
  5. Roll your own to cope with an environmental constraint (not product choice) which no existing protocol accommodates - but this is likely to be more at the physical/transfer layer. For example, a highly error prone channel or a demand for high resilience to blocking. Even then, the elements you need probably already exist, and just need to be assembled.

  6. \n
\n\n

If it's a product design, you're unlikely to differentiate by saving $0.50 in hardware. You either have a good value-add for your customer, or an insecure product that no one wants even if it sounds cool.

\n" }, { "Id": "1521", "CreationDate": "2017-05-26T12:01:13.597", "Body": "

I read an article recently from ft.com (dated May 21, 2017) which details the progress of the IoT company LightwaveRF. One thing I found very interesting in the article, and that is that they claim that:

\n\n
\n

Last week, LightwaveRF\u2019s shares rose 40 per cent when it announced a partnership with Google Home and Google Assistant. It signed a similar tie-up with Amazon\u2019s Alexa in November. Already 5,000 or so of LightwaveRF\u2019s customers are Alexa groupies.

\n
\n\n

What is the nature of these partnerships? Should we expect LightwaveRF thermostats to start looking more like Nest Thermostats, etc., or are they simply trying to make compatibility between Amazon Echos and Google Homes etc. with their devices smoother?

\n", "Title": "What is the nature of the partnership between LightwaveRF and Google Home / Amazon Echo?", "Tags": "|smart-home|lightwave-rf|", "Answer": "

Significantly, it looks like you don't need to use the LightwaveRF hub: both the Echo and Google Home descriptions avoid mentioning the hub (which is needed if you want to use IFTTT directly, for example).

\n" }, { "Id": "1523", "CreationDate": "2017-05-26T12:16:56.980", "Body": "

BACKGROUND

\n\n

The current setup I have for the Raspberry Pi is:

\n\n
USB Webcam -> Raspberry Pi -> Netgear Router -> Local ISP -> Internet\n
\n\n

My ISP gives me a captive portal through which I can login to access the internet and my public IP address is shown something like 203.xxx.xx.xx, when I try to access this IP from the browser, I am taken to the ISP's Captive Portal Page and not allowed to access anything further.

\n\n

There are many other people connected to the same ISP and they are given the same IP too (obviously).

\n\n

The ISP is not ready to allot a dedicated IP or open up any ports for me so that I can configure my Netgear router to forward ports etc.

\n\n

Question

\n\n

I have installed motion on my Pi and I can access it via 192.168.1.3:8080 via my local lan i.e inside my Netgear Router Network. How can I access from outside my Network i.e from a remote location like my office.

\n\n

I would not like to use third party software like teamviewer to relay my whole Pi system over the internet.

\n\n

Is there any way I can upload the stream to a cloud server efficiently and then access it?

\n", "Title": "How to access camera feed of Raspberry Pi out of a local broadband network?", "Tags": "|networking|raspberry-pi|remote-access|", "Answer": "

A lot of ISPs do not allow residential customers to use port 80 or 8080. Try using a different port number with Motion, and also check with your ISP to see which ports are allowed. You also need to activate port forwarding on the Netgear router so that traffic is routed to the Raspberry Pi.
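For instance, Motion's listening ports are set in its configuration file; the exact option names vary between Motion versions (older releases use `webcam_port` / `control_port`), and the port values below are only placeholders:

```shell
# /etc/motion/motion.conf (option names differ across Motion versions)
stream_port     8081   # the MJPEG stream the browser connects to
webcontrol_port 8082   # Motion's web control interface
```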

\n" }, { "Id": "1526", "CreationDate": "2017-05-27T12:01:23.037", "Body": "

In March of 2016, Belkin announced that they would not be supporting Apple HomeKit any time in the near future. This meant that integration was basically impossible across Apple devices.

\n\n

However, that article was over a year ago now. Has the status changed at all? Is Belkin planning on integrating HomeKit any time soon?

\n", "Title": "Do Belkin Wemo switches integrate with Apple HomeKit?", "Tags": "|wemo|apple-homekit|", "Answer": "

I'm including the answer with a link to the official article, since I had a hard time finding it. Hopefully, it will help someone.

\n

The status has indeed changed.

\n

Belkin published this article on May 25 in which they explain that integration to Apple HomeKit is indeed in the foreseeable future: Fall 2017 is supposed to see the introduction of the Wemo Bridge, which Bridges between your Wemo switches and your Apple HomeKit. Here's a quote from the article referenced:

\n
\n

Continuing to expand its award-winning Internet of Things ecosystem, Wemo\u00ae, the smart home brand from Belkin International, today announced it plans to enable Apple\u00ae HomeKit\u2122 compatibility to more than two million Wemo solutions on the market. With the HomeKit enabled Wemo Bridge, Wemo users will be able to ask Siri\u00ae on their iPhone, iPad or Apple Watch - \u201cSiri, turn on Wemo\u201d or \u201cSiri, dim the living room lights,\u201d or use the Apple Home app on any of these devices. Users will also be able to include Wemo products into scenes and rooms to work with more than one hundred other HomeKit compatible products and access them while on the go.

\n
\n

So the answer is, yes, you should be able to link your Wemo Switches and Apple HomeKit soon.

\n
\n

Edit:

\n

And... it's live!

\n" }, { "Id": "1535", "CreationDate": "2017-05-28T16:03:19.533", "Body": "

I have this payroll project where my client wants to use biometrics to easily keep track of the time attendance of his 400 employees. However, the problem is that his company has a high employee turnover rate.\nThis implies that, since a biometric fingerprint scanner usually has only a limited number of available fingerprint templates (from 1000 to 2000), it may eventually run out of memory.

\n\n

The best solution I can think of is to use the payroll system directly as the data store for the biometric fingerprint scanner. However, I can't seem to find any biometric fingerprint scanner that allows sending raw biometric data to the computer for storage.

\n\n

Is there a biometric scanner available on the market that offers an SDK allowing developers to interface the scanner directly with their software, so that the software can be used as a large data store?

\n", "Title": "Is there a biometrics fingerprint scanner that directly sends raw biometrics data?", "Tags": "|biometrics|", "Answer": "

Cytron, at least, has a model whose description says (emphasis my own):

\n
\n\n
\n

https://www.cytron.com.my/p-sn-fpr-uart

\n

It is a little expensive, but you should be able to find a cheaper one too, since there are many general-purpose fingerprint SDKs on GitHub that are not tied to a single manufacturer.

\n

Try a Google search such as: github fingerprint sdk.

\n" }, { "Id": "1558", "CreationDate": "2017-05-30T13:00:12.917", "Body": "

Recently, Amazon introduced calling between devices with Alexa Calling. With this, you can ask Alexa to call other people who have Echo devices (and have enabled the feature).

\n\n

Is it possible to tell Alexa to call yourself, so that you could use it as an intercom in the house if both devices are linked to the same account?

\n\n

Clearly, calling a device on a different account should work, but I couldn't find any authoritative reference confirming that calling between devices on the same account works.

\n", "Title": "Can I call other Echo devices in my house?", "Tags": "|smart-home|alexa|", "Answer": "

The solution you posted should work fine, but if not, there's also a workaround posted on lovemyecho.com that would allow you to use your Amazon Echo as a 1-way intercom, though it doesn't use the calling system.

\n

Basically, you have to follow these three steps:

\n

1. Record a custom mp3 file

\n

Basically, just record an mp3 of whatever you want to communicate to the people in the other room, e.g. "Lunchtime!" or whatever.

\n

2. Upload the mp3 file to Amazon Music

\n

Upload the file in a "Messages Playlist", being sure to use good names: they suggest something like MSG_something or something along those lines.

\n

3. Send the message to an Echo / Dot

\n

Go to the Alexa app and choose the message you want to send from the "Music / Books" area. Having used the MSG_something file names and having the files in the playlist should make them easier to find.

\n
\n

If you want 2-way communication, this is clearly impracticable, but the advantage of this system is that if there is a message you frequently want to send, it doesn't require the person on the other end to "pick up the line."

\n" }, { "Id": "1563", "CreationDate": "2017-05-30T21:38:31.120", "Body": "

I am currently writing a generic telecommand and telemetry library which I plan to use on Zephyr RTOS.

\n\n

Given an input CSV file, it generates some C++ code which can then easily be integrated in the rest of the project. Specifically, it generates a telecommand function and a telemetry function per defined subsystem. Each subsystem has a set of valid TM and TC data points, but those are known only at generation time.

\n\n

How should I go about testing that the library can work? I am thinking about defining stub functions which could check that the correct telemetry is read and the correct telecommand is acted upon.

\n\n

Are stub methods the usual testing methodology for embedded/IoT device testing? If not, what is the more common practice?

\n", "Title": "Testing a telemetry and telecommand interface", "Tags": "|protocols|testing|", "Answer": "

This might fit better on the Software Quality Assurance SE, but there is better context for it here.\nUsually you would want three levels of testing:

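At the unit level, stubs are indeed a common methodology. Here is a sketch of the idea (Python for brevity, although the question's generated code is C++; all names are invented for illustration): the stub records which telecommands it was asked to act on, so the test can assert the dispatcher routed the packet correctly.

```python
import unittest

class StubSubsystem:
    """Stub standing in for a generated subsystem handler."""
    def __init__(self):
        self.received = []              # telecommands acted upon
        self.telemetry = {"temp": 21.5} # canned telemetry values

    def telecommand(self, name, value):
        self.received.append((name, value))

    def telemetry_read(self, name):
        return self.telemetry[name]

def dispatch(subsystem, packet):
    """Tiny dispatcher standing in for the generated TC/TM routing code."""
    kind, name, value = packet
    if kind == "TC":
        subsystem.telecommand(name, value)
        return None
    return subsystem.telemetry_read(name)

class TestDispatch(unittest.TestCase):
    def test_telecommand_is_acted_upon(self):
        stub = StubSubsystem()
        dispatch(stub, ("TC", "heater_on", 1))
        self.assertEqual(stub.received, [("heater_on", 1)])

    def test_telemetry_is_read(self):
        stub = StubSubsystem()
        self.assertEqual(dispatch(stub, ("TM", "temp", None)), 21.5)
```

Since the valid data points are only known at generation time, it can pay off to generate the stub definitions from the same CSV file, so the tests stay in sync with the generated code.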
\n\n\n" }, { "Id": "1566", "CreationDate": "2017-05-31T14:49:49.643", "Body": "

According to the recent paper A Smart Home is No Castle:\nPrivacy Vulnerabilities of Encrypted IoT Traffic, many smart home devices can be 'fingerprinted' by their connection patterns. Since most devices connect to a small set of URLs when they're invoked, it's possible for an attacker (or an unfriendly ISP) to determine when you use each device.

\n\n

For example, they tracked the traffic going to the Alexa servers from a home router (the URLs they used are in Figure 1 in the paper):

\n\n

\"Alexa

\n\n

They also show that a similar principle can be used to determine when a sleep monitor is used (and hence when you wake up/go to sleep), or when a smart switch is toggled.

\n\n

Clearly, it's disturbing that you can get so much information from a device, despite it being encrypted. It seems harder to get much information from computer traffic, because the servers accessed are much more diverse, but for an IoT device that only 'calls home' to a specific server, it appears easy to track which device was used, and when.

\n\n

Since many countries store metadata such as this, it's feasible that they would be able to use this method themselves to determine your activity, and the same amount of data would be leaked to any network-level attacker.

\n\n

Are there any ways to prevent traffic from being fingerprinted in this way, or at least to reduce the amount of sensitive data that can be extracted?

\n", "Title": "How can I prevent my device leaking sensitive data through traffic fingerprinting?", "Tags": "|privacy|https|tls|", "Answer": "

What are the steps to the privacy leak described?

\n\n

Basically, there are three parts to getting the information described in the paper.

\n\n
    \n
  1. An interested party recording the outgoing traffic (2)
  2. \n
  3. Said party being able to split the traffic streams (4.1)
  4. \n
  5. Analyzing the different traffic streams \n\n
  6. \n
\n\n

Recording the outgoing traffic

\n\n

While the paper simply assumes such an attacker as a prerequisite, this is already quite a hurdle.

\n\n
\n

Specifically, an adversary in this model can observe and record all wide-area network traffic, including traffic to and from home gateway routers.

\n
\n\n

That's not a lot of potential attackers. Basically, that's the ISP you use to connect to the Internet, the WAN carriers, and interested intelligence agencies. Thankfully, the one with the easiest access, your ISP, is likely not interested, since surveillance doesn't really help their business model. On the other hand, the ISPs are the ones courts can compel to record and provide this information.

\n\n
\n

We assume that ISPs are typically uninterested in performing targeted active attacks on individual users.

\n
\n\n

While not interested in these attacks themselves, they might very well be forced to provide the information. Of course, that depends on the laws of the country they operate in.

\n\n

Assuming you haven't had a court compel your ISP, or attracted the attention of an intelligence agency with the necessary capabilities to record the traffic, the likeliest attacker that could perform the further steps would be a compromised home router.

\n\n

Splitting traffic streams

\n\n

The split into traffic streams is assumed to be performed by grouping them by the external communication partner, e.g. the services the IoT devices communicate with. The aforementioned attacker obviously has the target IPs; after all, that information is needed to route the packets where they belong.

\n\n

A good possibility, which Mawg describes in his answer, is the use of a VPN service provider. With a VPN, the ISP or any otherwise capable attacker cannot deduce the actual target of the communication, since every communication is addressed to the VPN service provider. However, that enables another party to be the attacker in this model: the VPN service provider.

\n\n

By using a VPN router, you essentially enable another party to be this attacker. The advantage of the TOR network, as mentioned in Sylvain's answer, is the obfuscation of streams while not elevating another player to the proverbial man in the middle. If you're using TOR, it would take either really bad luck with TOR nodes, or really, really interested parties, for the attacker to identify the streams.

\n\n

This Wiki article subsection describes the theoretical possibilities of still identifying the source and target of TOR communications. However, these methods require serious resources and access to the basic Internet infrastructure, which again brings us back to the same group of potential attackers as before, and they would need even more motivation to invest the effort to track that traffic.

\n\n

If you VPN with either solution across jurisdictions (continents, or at least countries, not counties or the like), you are likely safe from court proceedings.

\n\n

Summary:

\n\n\n\n

Analyzing the different traffic streams

\n\n

This is actually trivial for anyone who has cleared the first two hurdles. Unless you have a home-made solution, the traffic patterns of any IoT device can easily be recorded and afterwards recognized once the data set is sufficient.

\n\n

However, as Sean describes in his answer, you can still muddy the waters. If your device sends additional spoofed data, or bulk-transmits data that does not have to be real-time, the pattern analysis gets really complicated.

\n\n
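As a rough illustration of that muddying (a Python sketch; the bucket size and message shapes are invented): pad every message to a fixed size, and emit dummies when nothing real is queued, so the observable pattern stays constant regardless of what the device actually does.

```python
import os

BUCKET = 1024  # hypothetical fixed size bucket for every message

def pad_payload(payload: bytes, bucket: int = BUCKET) -> bytes:
    """Pad up to the next multiple of `bucket` so message sizes leak less."""
    target = -(-max(len(payload), 1) // bucket) * bucket  # round up, min one bucket
    return payload + os.urandom(target - len(payload))

def next_message(queue):
    """At each fixed tick, emit the next real message if one is queued,
    otherwise a dummy of identical size: the observer sees a constant stream."""
    real = queue.pop(0) if queue else b""
    return pad_payload(real)

# Every emitted message has the same length, whether real or cover traffic:
print(len(next_message([b'{"motion": true}'])), len(next_message([])))  # → 1024 1024
```

Sending these on a fixed schedule (rather than when events occur) also hides the timing signal, at the cost of bandwidth and latency.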
\n\n

1 not a lawyer

\n" }, { "Id": "1571", "CreationDate": "2017-06-01T12:13:15.490", "Body": "

My curiosity was recently piqued by IOTpodcast.com. They mention in the description of the podcast that one of the things to discuss is how smart lighting is saving sea turtles. I then found this article from smartledconcepts.com which states that the lights are tuned to specific frequencies to encourage the sea turtles to go in the right direction. I also found an article here on scientificamerica.com which gives a detailed scientific explanation for how the specific wavelengths direct the turtles.

\n\n

However, I'm more interested in the \"Smart\" part. Are these lights actually \"Smart\" in that they turn on and off at times sea turtles are likely to need them on or off, or in their connections one to another, or are they just called \"Smart\" because they were made by the SmartLED company?

\n", "Title": "Are the Smart Lights saving turtles actually Smart?", "Tags": "|lighting|", "Answer": "

As far as I can tell, there's no IoT-related 'smartness' despite the name. As mico points out, the lights work by producing frequencies of light that humans can see, but that will not affect the turtle breeding cycle.

\n\n

On their website, they promote the energy saving benefits, but don't mention any networking features, as you'd expect from a 'smart bulb' like the Philips Hue, for example.

\n\n

Note that they also say:

\n\n
\n

ILLUMINATE INTELLIGENTLY

\n \n

Reduce your energy consumption, maintenance and servicing costs.

\n
\n\n

That seems like a clear signal to me that their definition of \"intelligence\" is that it uses less energy, and requires less maintenance.

\n\n

So, no, the bulbs aren't IoT connected, as far as I can see; the \"smart\" description is just part of the branding/sales pitch.

\n" }, { "Id": "1575", "CreationDate": "2017-06-02T08:07:07.173", "Body": "

My Zigbee network connection seems to be intermittent. Suddenly all routers will lose connection; for example, I am unable to ping any of the routers. After a while (e.g. overnight, or a period of a few days) the network will be up again.

\n\n

Set up:

\n\n\n\n

Currently using channel 25.

\n\n

I want to find out if mounting the Zigbee routers on metal partitions (walls) will cause problems for the network. The part of the network that is in another building with no metal partitions seems to be more stable.\nOr are there any other factors that could make the network unstable?

\n", "Title": "Zigbee network intermittent issues", "Tags": "|zigbee|routers|rfid|", "Answer": "

It is important to identify whether the problem is:

\n\n\n\n

The paths to take here are too diverse, so you need to narrow down the cases.\nSometimes a photo helps to spot possible deployment issues.

\n\n

Here are some ideas:

\n\n
    \n
  1. A nearby lift can reduce the signal due to the amount of metal in it
  2. \n
  3. Also consider water as a strong signal attenuator; this includes humans, so if the area is crowded it is better to have the antennas above 2 m.
  4. \n
  5. I have seen communication problems caused by wiring that was badly bent at a sharp 90-degree angle, which damaged the internal filaments.
  6. \n
  7. Another case I saw is that too many Zigbee gateways (access points) can produce conflicts, since clients may roam from one to another; this depends on the local distribution of the gateways and the signal strength. If the clients are statically located, I would assign them fixed gateways.
  8. \n
  9. The ratio between the number of clients and the number of gateways has to be calculated and checked.
  10. \n
  11. Don't forget to check the configuration too; maybe there is a bottleneck, that is, many clients use the same subnet, so all the packets are forwarded to a single switch/router. In that case, splitting the clients into groups or subnets and diverting the traffic can be a possible solution.
  12. \n
  13. Are all the client devices and gateways running the same firmware (SW) versions? Are they all up to date?
  14. \n
  15. Is there any difference in the security configuration? Encryption and VPNs usually slow down the traffic compared with unprotected networks.
  16. \n
  17. I almost forgot to say that the logs from the clients or the gateway will provide you with relevant information about the communication. If the whole network goes off abruptly and remains in that state for days, I would go for the gateway logs and see if there is some indication that the communication was lost.
  18. \n
  19. Check for duplicated IP addresses; the behavior you describe can also be due to 2 devices having the same IP, so that when one of them is on the net, the other is not communicating at all. Check the routers' IPs in this case.
  20. \n
  21. Once I experienced communication loss at a customer site, and it turned out that the network security did not allow clients to reconnect more than 3 times in a row; the router would then ban my client for hours. All of this was security configuration in the router/gateway.
  22. \n
\n\n

But of course, for all of this we need more information about the deployment and the configuration.

\n" }, { "Id": "1578", "CreationDate": "2017-06-02T22:51:04.937", "Body": "

I am working on my home automation project. I have a server on Digital Ocean. I want to put my PHP code on that server, and that server will connect to my Raspberry Pi. How can I pass messages between my server and the Raspberry Pi? I want to control things from my website over the internet.

\n\n

Turn On Light (from internet) --> Digital Ocean Server--> My Home Router --> Raspberry Pi

\n\n

And vice versa. So the question I am asking is: how do I connect these things? Each time the server receives a message, does it have to pass the message to the Raspberry Pi (push), or does the Raspberry Pi have to check whether there is any message waiting (poll)? How do I pass messages between the intranet and the internet?

\n\n


UPDATE: I have found some resources. Can anyone tell me whether they are useful or not? I am new to this.\n
https://nodejs.org/api/http.html#http_http
\nhttp://aiohttp.readthedocs.io/en/stable/client.html

\n", "Title": "Communication Between Server and Raspberry Pi", "Tags": "|smart-home|networking|raspberry-pi|", "Answer": "

It has already been said in other answers that you should use MQTT in your case.

\n

But why?

\n

MQTT is The Protocol if your things are behind a firewall in a private network [1]. All it takes is an outbound rule for port 1883, or with some configuration maybe not even that [2].

\n

How will things change if you take MQTT instead of HTTP?

\n

You will need one more block in your combo of

\n
\n

Turn On Light (from internet) --> Digital Ocean Server--> My Home Router --> Raspberry Pi

\n
\n

Your flow would be:

\n
\n
    \n
  1. Subscribe to the Lights On event on the Raspberry Pi (message between the Broker and the RPi)
  2. \n
\n

...later that night:

\n
    \n
  1. Publish the Lights On event from the server (the message goes Server -> Broker -> RPi)
  2. \n
\n
\n

What is a broker?

\n

A message broker is a service that can run on your Digital Ocean server; it takes in publish and subscribe requests. [3]

\n

One such broker is called Mosquitto; it is open source and easy to install. You install the service and run it. No coding involved, maybe a little configuration. [4]

\n

Publish and subscribe?

\n

If you are familiar with PHP, you can also use it with Mosquitto [5]. The sample code at least looks straightforward; the link contains more examples:

\n
<?php\n\n$c = new Mosquitto\\Client;\n$c->onConnect(function() use ($c) {\n    $c->publish('mgdm/test', 'Hello', 2);\n});\n\n$c->connect('test.mosquitto.org');\n\nfor ($i = 0; $i < 100; $i++) {\n     // Loop around to permit the library to do its work\n     $c->loop(1);\n}\n\necho "Finished\\n";\n
\n
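The Raspberry Pi's subscribe side of the same flow could look like this in Python (a sketch assuming the paho-mqtt package; the broker host and topic names are invented placeholders):

```python
def decode_command(payload: bytes) -> bool:
    """Map an MQTT payload to the desired light state."""
    return payload.strip().lower() == b"on"

def run(broker="my-droplet.example.com", topic="home/lights"):
    """Wait on the Raspberry Pi for commands published by the server.

    Broker host and topic are invented placeholders; requires the
    paho-mqtt package (pip install paho-mqtt) and a running broker.
    """
    import paho.mqtt.client as mqtt  # imported here so the sketch loads without it

    def on_message(client, userdata, msg):
        # Here you would drive a GPIO pin instead of printing.
        print("lights", "on" if decode_command(msg.payload) else "off")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(broker, 1883)   # 1883 is the standard MQTT port
    client.subscribe(topic)
    client.loop_forever()          # block; the broker pushes events to us

# On the Pi: call run(). On the server, publish "on"/"off" to the same topic
# when the web application is used.
```

Note this is the "push" model from the question: the Pi keeps one outbound connection to the broker open and the broker pushes events down it, so no port forwarding is needed at home.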

Sources:

\n

[1] https://mongoose-os.com/blog/why-mqtt-is-getting-so-popular-in-iot/

\n

[2] https://stackoverflow.com/questions/32174272/how-mqtt-works-behind-the-firewall

\n

[3] http://www.hivemq.com/blog/mqtt-essentials-part2-publish-subscribe

\n

[4] https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-the-mosquitto-mqtt-messaging-broker-on-ubuntu-16-04

\n

[5] https://github.com/mgdm/Mosquitto-PHP

\n" }, { "Id": "1589", "CreationDate": "2017-06-06T10:58:08.563", "Body": "

As part of a new requirement for a web application I am maintaining, we need to send a signal to a device in a factory when certain conditions are met in the application.

\n\n

Sending a signal is simply short-circuiting 2 wires (a physical push button). The controlled device is not anywhere near the web server that is hosting the application, but it is on the same network. So the idea is to buy boards that already have hardware relays and can accept activation commands via TCP/IP. The ability to secure a board with password protection is a plus but not a must.

\n\n

As one of the goals is to have interfaces that are as standard as possible, we would like to avoid making our own Arduino- or Raspberry Pi-based boards. I am asking if there are some known and simple industrial-grade relays/switches with a TCP/IP interface.

\n\n

Is there a name for such device?

\n\n

I tried to search for it, and the closest I found is this Chinese board that allows controlling 2 relays through TCP/IP.

\n\n

This seems like a long shot to ask here, but I am hoping someone has worked with similar devices, and it would be great if I could get the name of a reputable company that produces such devices.

\n", "Title": "Looking for a relay that can be controlled from .net web application (via TCP/IP)", "Tags": "|hardware|industry-4.0|", "Answer": "

There are a lot of devices out there that allow you to control digital I/O over a network. The difference between this and relay control is very small, as you can easily add a solid-state relay to a digital output.

\n\n

Products in this space can vary from hobbyist to home to industrial.

\n\n

When selecting a device for use in a business environment, there are some other things you should consider:

\n\n\n" }, { "Id": "1590", "CreationDate": "2017-06-06T15:35:48.777", "Body": "

I want to know if there is a way to send files to a website with an ESP8266 or any other IoT device. The IoT device will be the client; a PHP or other script on the website will act as the server. It will look like the IoT device is uploading files to the website.

\n", "Title": "How to write/post files to website server through ESP 8266", "Tags": "|smart-home|networking|esp8266|streaming|", "Answer": "

Try this web server for IoT and real-time GPS tracking: https://iot.electronixforu.com\nIt supports the passthrough mode of the ESP8266, meaning you can send data as fast as you like (normally at 1-second intervals); details are available at https://electronixforu.com/iot.html

\n" }, { "Id": "1595", "CreationDate": "2017-06-06T18:12:30.747", "Body": "

I am using an ESP8266 to emulate a WeMo device with the wemos and fauxmoESP Arduino code found on the internet. Now that I understand the basic interaction of on and off commands, I'd like to add a status response for the state of some pins on the device. It appears that \"turn on\" and \"turn off\" are basic Alexa skills, and those work, but there is no \"status\" or \"state\" verbal command.

\n\n

I've found places in the code that handle the eventservice XML, for example <binarystate>1</binarystate> to turn it on, but I cannot find any documentation on getting status, or <getdevicestate>. Example of use: if I can't see whether a light is on somewhere, I'd like to query the device to find out if it is on or off.

\n\n

Since the device emulates a belkin on/off switch, the setup.xml packet only has:

\n\n
<service>\n    <serviceType>urn:Belkin:service:basicevent:1</serviceType>\n    <serviceId>urn:Belkin:serviceId:basicevent1</serviceId>\n    <controlURL>/upnp/control/basicevent1</controlURL>\n    <eventSubURL>/upnp/event/basicevent1</eventSubURL>\n    <SCPDURL>/eventservice.xml</SCPDURL>\n</service>\n
\n\n

and the basic event is not enough to get status or further capability.

\n\n

This is all done without writing an AWS skill; it is handled with direct dialog on the local LAN between the ESP8266 web server running fauxmoESP and the Echo Dot. I can see the packets by sniffing the (wireless) LAN, and believe it would be straightforward to add more capabilities if I could find the documentation on the control messaging XML packets.

\n\n

Where can I find these control XML dialog templates and hopefully examples of how to use them? I am getting the sense that this can only be accomplished by using an AWS skill but it seems so unnecessary. Can someone give me some guidance here?

\n\n

Also, what is the utterance for Alexa to check status of a device? It could be that there is no built in utterance for this and I will need to write an AWS skill (which I don't want to do if possible.)

\n", "Title": "Getting status from WeMo device using Alexa", "Tags": "|alexa|esp8266|wemo|", "Answer": "

Perhaps the software feature set has changed, but I've found that the following works. This is from my DIY code for a NodeMCU/D1 mini ESP8266 module, using the ESP8266 web server and listening for local UDP broadcasts. I noticed in the Alexa calls to /upnp/control/basicevent1 that the requests were changing subtly. It all boils down to the same event, but the XML of the request has either <SetBinaryState> or <GetBinaryState>.

\n\n

As long as you're holding state in your sketch, something like this will work:

\n\n
 void Switch::handleUpnpControl(){\n\n  Serial.println(\"########## Responding to  /upnp/control/basicevent1 ... ##########\");      \n\n  String request = server->arg(0);      \n  Serial.print(\"request:\");\n  Serial.println(request);\n\n  if (request.indexOf(\"<u:SetBinaryState\") > 0) {\n    Serial.print(\"Got setState update...\");\n    if(request.indexOf(\"<BinaryState>1</BinaryState>\") > 0) {\n        Serial.println(\"Got Turn on request\");\n    state = 1;\n    onCallback();\n    }\n    if(request.indexOf(\"<BinaryState>0</BinaryState>\") > 0) {\n        Serial.println(\"Got Turn off request\");\n        state = 0;\n        offCallback();\n    }\n    server->send(200, \"text/plain\", \"\");\n  }\n\n  if (request.indexOf(\"<u:GetBinaryState\") > 0) {\n    Serial.println(\"Got inquiry for current state:\");\n     String statusResponse = \n     \"<s:Envelope xmlns:s=\\\"http://schemas.xmlsoap.org/soap/envelope/\\\"     s:encodingStyle=\\\"http://schemas.xmlsoap.org/soap/encoding/\\\">\"\n        \"<s:Body>\"\n          \"<u:GetBinaryStateResponse xmlns:u=\\\"urn:Belkin:service:basicevent:1\\\">\"\n            \"<BinaryState>\" + String(state) + \"</BinaryState>\"\n          \"</u:GetBinaryStateResponse>\"\n        \"</s:Body>\"\n      \"</s:Envelope>\\r\\n\"\n      \"\\r\\n\";\n      Serial.print(\"Sending status response: \");\n      Serial.println(statusResponse);\n      server->send(200, \"text/plain\", statusResponse);\n  }\n}\n
\n" }, { "Id": "1598", "CreationDate": "2017-06-07T13:45:26.017", "Body": "

I am new to the IoT space, and after days of reading I am still confused about the different types of IoT devices. I have read about smart devices that connect to the cloud in different ways, such as MQTT, HTTP, LWM2M, and more. Are IoT devices really fragmented like this (MQTT devices, HTTP devices), or have I just misunderstood? If they are divided, what are the characteristics of each type (e.g. smarter or faster compared to the other types)?

\n\n

To be more specific, I am studying the Eclipse IoT projects, especially Eclipse Hono. Hono provides different protocol adapters, such as MQTT and REST, and each of them is meant to connect to a type of device, as shown in the first drawing at this link: Hono. My question is whether MQTT IoT devices are completely different from other types such as HTTP in their features, or is the difference only the communication protocol? Can a device marked with \"MQTT\" be swapped to \"HTTP\" or vice versa?

\n\n

It would be great if you could give me some examples of devices that are categorized as MQTT, HTTP, or LWM2M, so I can visualize this more easily.

\n", "Title": "What are the differences between MQTT, HTTP, CoAP devices (besides communication protocol)?", "Tags": "|protocols|system-architecture|", "Answer": "
\n

if the MQTT IoT devices are completely different from other types like HTTP in features, or is the difference only about communication protocols?

\n
\n\n

The difference is at the protocol level. The same device may run MQTT as well as HTTP if required, but the device maker will choose a protocol based on the problem statement.

\n\n
\n

Can a device that marked with \"MQTT\" be swapped to \"HTTP\" or vice versa?

\n
\n\n

Yes. For example, you could make your own device based on an RPi board with a temperature and a humidity sensor attached, and choose to report the sensor data over MQTT or HTTP as required.

\n\n
\n

Eclipse Hono

\n
\n\n

Hono's goal is to enable the connection of a large number of IoT devices to a back end, and to enable a business application's interaction with those devices in a uniform way, regardless of the device communication protocol. The Hono adapters help to achieve this. The Hono server interacts with MQTT/HTTP-enabled devices via the respective adapter; that way, any type of device can be deployed in the field. The application interacts with the Hono server using AMQP.

\n\n

Now to put this in perspective: let's say you are writing an application that sits in the control room of a nuclear power plant, showing data from all sensors in a single dashboard. The power plant may be sourcing sensors from different vendors. Some sensors may be built to use MQTT, whereas others may be built to use HTTP. If your application had to implement adapters for all of them, that would be tedious. Hono is offering to do that. Your application can be simpler and interact only with the Hono server. A sensor in the field may go bad, and eventually a sensor A (using MQTT) from vendor A may be replaced by a sensor B (using HTTP) from vendor B. As Hono provides both adapters, switching the sensors is trivial: your application is barely impacted and still gets the data the same way as usual.

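The adapter idea can be sketched in a few lines (Python, purely illustrative; Hono's real protocol adapters are of course far more involved): each adapter translates its protocol's message into one common event shape, so the consuming application never sees the difference.

```python
import json

def mqtt_adapter(topic: str, payload: bytes) -> dict:
    """Normalize an MQTT message (topic + raw payload) into a common event."""
    return {"device": topic.split("/")[-1], "data": json.loads(payload)}

def http_adapter(path: str, body: str) -> dict:
    """Normalize an HTTP POST (path + JSON body) into the same common event."""
    return {"device": path.strip("/").split("/")[-1], "data": json.loads(body)}

# Two sensors speaking different protocols yield interchangeable events,
# so the dashboard code consuming them stays protocol-agnostic:
a = mqtt_adapter("telemetry/sensor-A", b'{"temp": 42.0}')
b = http_adapter("/telemetry/sensor-B", '{"temp": 42.0}')
print(a["data"] == b["data"])  # → True
```

Swapping a vendor then means swapping (or adding) an adapter, not rewriting the application.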
\n" }, { "Id": "1610", "CreationDate": "2017-06-11T11:46:19.750", "Body": "

I have a question, as stated in the title. I hope it will not be \"primarily opinion based\", since I want to ask whether what I have in mind is even doable.

\n\n

I want to create a network of a couple of cameras that could stream from a given location to a local server.

\n\n

Few requirements for my project:

\n\n\n\n

On the internet I saw simple examples where 1 cam = 1 Raspberry Pi, but I don't know if that isn't overkill, since that setup costs around 50 euros and you get a whole OS for one job.

\n", "Title": "Cheap video streaming platform using Raspberry Pi", "Tags": "|smart-home|raspberry-pi|data-transfer|", "Answer": "

It's easy to build your own platform to stream a video/security camera for less than $50. You can use any Pi: buy one used for cheap, or a Pi Zero W (it has wireless) for less than $10.

\n\n

Here are a few examples of how this can be achieved using both Raspbian and Windows IoT Core. You can use the Raspberry Pi Camera Module or a Webcam.

\n\n

Raspbian - Streaming to the public internet

\n\n

Windows IoT Core - Streaming on your local network

\n" }, { "Id": "1611", "CreationDate": "2017-06-11T12:08:04.393", "Body": "

I am considering getting an Echo Dot in the UK, but I will use it in a country where Amazon has yet to launch officially. Will the Echo even work? If anyone from the community has tried something similar, can you please list which features of the Echo are limited in countries outside the supported list?

\n\n

I intend to use the Echo for the following things to begin with:

\n\n\n", "Title": "Does Amazon Echo work in countries where Amazon is yet to launch Echo officially?", "Tags": "|amazon-echo|", "Answer": "

The support is generally pretty good for locations outside of the UK, US and Germany (although it's not officially supported yet).

\n\n

Coomie's experience of using the device in Australia is useful to read\u2014here's a brief summary:

\n\n
\n

What does work?

\n \n

Core - Amazon itself works as long as it has power and internet.

\n \n

Smart home - All the smart home devices that would otherwise work in your country work.

\n \n

Time zones - You can set your timezone to anywhere in the world.

\n \n

Music - It can play music if you would otherwise be able to play it, e.g. Amazon Play, Pandora & Spotify all work for me.

\n \n

Weather - It can get weather for most cities in the world, but you must ask about your city. e.g. \"Alexa, what is the weather in Cairo,\n Egypt?\"

\n \n

More Skills - All the skills that don't require your location have worked for me.

\n
\n\n

However, I've heard that there are workarounds to set your location, despite it not being officially supported. It's a bit involved, but Beebom outline the steps to set a custom location through the Alexa website (alexa.amazon.com). You need to edit the device location, then:

\n\n
\n \n
\n\n

You can find the curl documentation on their website, although it's helpful to be comfortable with the command line if you want to do that.

\n\n

You can just set a location in your device's region, though, if you don't want to go through all of this\u2014location-based services will be wrong, but if you don't mind, it's far easier than sending API requests.

\n\n
\n\n

In summary:

\n\n\n\n

You might want to test Alexa through a different device before buying an Echo to test for yourself. You could try using this Echo simulator after setting up an account with Amazon and following the steps above. It should behave near identically to a real Echo, so you'll get a feel for what works and what doesn't.

\n" }, { "Id": "1626", "CreationDate": "2017-06-13T08:29:20.530", "Body": "

I have a project where I need to create a Wi-Fi mesh network of nodes sharing a distributed mesh database that requires relatively quick search access on each node. I was initially thinking of running this using nodes consisting of ESP8266s (https://github.com/Coopdis/easyMesh), each containing an SD card (to store the database), but I'm concerned that most of the Arduino-type code I've seen runs mostly in memory. Does this mean I have to load the \"database\" (in reality, probably just a list with 2 or 3 fields per record) into memory? I don't want to loop through the list to find the record I'm looking for, as I think this will not be efficient; I was hoping to implement some kind of binary search algorithm. Note that this database could grow to about 40,000+ entries.

\n\n

My fallback option is to run Windows IoT Core on a Raspberry Pi where I can use C# and possibly even a real database. My issue with this solution is that I have not been able to find an example of running a mesh network using Windows IoT Core.

\n\n

Any thoughts or assistance would be much appreciated.

\n", "Title": "WiFi Mesh on Windows IoT core", "Tags": "|mesh-networks|arduino|microsoft-windows-iot|", "Answer": "

I posted this same question to the Microsoft forums and got a reply from IoTGirl saying that WiFi Direct is an option: Windows IoT Core WiFi Mesh

\n\n

I need to confirm whether the RPi 3B supports WiFi Direct, and then also find out whether it supports many-to-many connections via WiFi Direct. (If anyone has any experience with this, feedback would be much appreciated.)

\n\n

Hope this keeps the conversation going, or at least helps someone else.

\n" }, { "Id": "1631", "CreationDate": "2017-06-14T06:25:23.717", "Body": "

I hope this is the right Stack Exchange to ask this, but since RFID cards are becoming more common and, with new microcontrollers, a lot of home-built Internet of Things devices can use RFID, I figured here seemed right. If not, I am truly sorry and will find somewhere else to ask.

\n\n

I am wondering if it is safe to keep an RFID card in the back of a cellphone case (pressed right against the phone's back) and to keep it in the case while using both the phone and the card.

\n\n

Basically, I keep my iPhone 6S in a drop-case and want to put my door's RFID proximity card in the back of the case so I don't have to carry an additional card when my phone is typically in my hand.

\n\n

But I don't want to waste the money on an RFID card replacement, and more importantly I don't want to damage the 600+ iPhone.

\n\n

Any help would be appreciated; the little tidbits I found around seem to indicate it would be okay, but I haven't found anything definitive.

\n\n

Thanks in advance, Lin.

\n", "Title": "Is it safe to have a RFID card in the back of a Cellphone case", "Tags": "|rfid|", "Answer": "

An RFID card does not emit any electromagnetic signals when it is not close to a reader; it is designed to be passive. It is only activated when brought into proximity to a reader. So an RFID card cannot really damage your phone while it sits in the back of your phone's case.

\n" }, { "Id": "1634", "CreationDate": "2017-06-14T09:52:43.670", "Body": "

Is there any tool to check whether my generated packets are valid MQTT (3.1.1) packets? Background: I'm using Paho embedded (MQTTPacket) on my microcontroller to generate packets to be sent over my Ethernet driver. But something seems to be wrong with my packets, as they aren't displayed in mqtt-spy.

\n\n

For example, this packet:

\n\n
0x10 0x1D 0x04 0x00 0x4D 0x51 0x54 0x54 0x04 0x02 0x00 0x14 0x00 0x11 \n0x65 0x76 0x6F 0x6C 0x61 0x6E 0x5F 0x31 0x32 0x33 0x34 0x35 0x36 0x37\n0x38 0x39 0x30 0x30 0x15 0x00 0x06 0x65 0x76 0x6F 0x6C 0x61 0x6E 0x48 \n0x65 0x6C 0x6C 0x6F 0x20 0x65 0x65 0x76 0x6F 0x4C 0x41 0x4E 0x21 0xE0 \n0x00 0x29 0x90 0x20\n
\n\n

Is it valid?

\n", "Title": "MQTT packet validity check", "Tags": "|mqtt|paho|", "Answer": "

The message above, decoded:

\n\n
MQIsdp{03}{02}{00}{14}{00}{11}evolan_12345678900!{00}{06}evolanHello eevoLAN!\u00e0{00}\n
\n\n

It is mixed ASCII/hex (topic, client ID and payload are plain ASCII; special chars are {hex}).

\n\n

It turns out that this packet always contained the connect/disconnect options as well (refer to this table: http://docs.solace.com/MQTT-311-Prtl-Conformance-Spec/MQTT%20Control%20Packet%20format.htm#_Table_2.1_-)

\n\n

So I rewrote my method for publishing and split it into separate Connect/Disconnect and Publish methods. (The examples for Paho embedded are very poorly documented, so I didn't realise I had always been sending a connect/disconnect request along with my published message.)

\n\n

Programming details:

\n\n

Connect method

\n\n
MQTTPacket_connectData data = MQTTPacket_connectData_initializer;\n\nchar m_buf[200];\nuint32_t m_len = sizeof(m_buf);\n\ndata.clientID.cstring = \"<your client id string>\";\ndata.keepAliveInterval = 20;\ndata.cleansession = 1;\ndata.MQTTVersion = 4;\nuint32_t len = MQTTSerialize_connect(m_buf, m_len, &data);\n\nyour_ethernet_driver_send_method(m_buf, len);\n
\n\n

Publish method

\n\n
char m_buf[200];\nuint32_t m_len = sizeof(m_buf);\n\nMQTTString topicString = MQTTString_initializer;\n\nuint32_t payloadlen = strlen(m_payload);\ntopicString.cstring = m_topic;\n\nuint32_t len = 0;\nlen += MQTTSerialize_publish(m_buf + len, m_len - len, 0, 0, 0, 0, topicString, m_payload, payloadlen);\n\nyour_ethernet_driver_send_method(m_buf, len);\n
\n\n

This works now; I can confirm packets are arriving in mqtt-spy!

\n\n

So, unfortunately, I couldn't find a tool for validating my packet. What I did instead was output it to a text editor and highlight the special chars as hex values, then compare the header bytes to the definition posted above.
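Since no ready-made validator turned up, a quick self-check can also be scripted. Below is a minimal Python sketch (my addition, not part of Paho) that decodes only the MQTT 3.1.1 fixed header, i.e. the packet type and the variable-length Remaining Length field, and checks that the length matches the buffer. It won't validate payload contents, but it catches the kind of framing problem described above:

```python
# Minimal MQTT 3.1.1 fixed-header sanity check -- a sketch, not a full
# protocol validator. Decodes packet type and Remaining Length, then
# verifies Remaining Length against the actual buffer size.

PACKET_TYPES = {1: "CONNECT", 2: "CONNACK", 3: "PUBLISH", 8: "SUBSCRIBE",
                12: "PINGREQ", 13: "PINGRESP", 14: "DISCONNECT"}

def check_fixed_header(buf: bytes):
    ptype = buf[0] >> 4  # high nibble of the first byte is the packet type
    # Decode Remaining Length (variable-byte integer, up to 4 bytes)
    value, multiplier, i = 0, 1, 1
    while True:
        byte = buf[i]
        value += (byte & 0x7F) * multiplier
        i += 1
        if byte & 0x80 == 0:
            break
        multiplier *= 128
        if multiplier > 128 ** 3:
            raise ValueError("malformed Remaining Length")
    ok = (len(buf) - i) == value
    return PACKET_TYPES.get(ptype, "RESERVED"), value, ok

# A well-formed DISCONNECT is just 0xE0 0x00:
print(check_fixed_header(bytes([0xE0, 0x00])))  # ('DISCONNECT', 0, True)
```

Running each suspect buffer through such a check would have flagged the extra connect/disconnect packets concatenated into the dump above, since the first Remaining Length does not cover the whole buffer.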

\n" }, { "Id": "1647", "CreationDate": "2017-06-15T08:42:56.440", "Body": "

Devices like routers usually have a web-based administrative interface that allows you to configure many aspects of the device from a web browser. Essentially, this is a web server running on the embedded device, where requests to the web site carry out different tasks.

\n\n

I was wondering whether there was a free (both for personal and commercial use) web admin interface that can be installed on embedded devices and allows pages to be added/customised.

\n", "Title": "Web admin panel for embedded Linux IoT devices", "Tags": "|networking|software|", "Answer": "

This mostly depends on how many resources your embedded device has.

\n\n

For example, on ESP8266 devices your options are limited to your own application, written in C (or possibly Lua). There are no resources to run anything else.

\n\n

On a device like the Onion Omega2, with 32 MB of flash and 128 MB of RAM, you can install LuCI from OpenWrt and add more pages using a scripting language like Lua or Python.

\n\n

Finally, on larger devices like the Raspberry Pi Zero, with 512 MB of RAM and multi-gigabyte storage, you can use something like Webmin.

\n" }, { "Id": "1671", "CreationDate": "2017-06-18T00:05:59.250", "Body": "

I discussed here how to avoid ping messages in logs. By changing the log_type entries in mosquitto.conf I found that pings are reported under the debug log level. I would comment out this log type in the conf to avoid pings in the log, but sadly PUBLISH entries are under the same log level and they are all relevant for me.\nSo, I wonder whether there is any way to exclude only pings from being logged.

\n\n

Here is my mosquitto.conf:

\n\n
# Place your local configuration in /etc/mosquitto/conf.d/\n#\n# A full description of the configuration file is at\n# /usr/share/doc/mosquitto/examples/mosquitto.conf.example\n\npid_file /var/run/mosquitto.pid\n\n#persistence false\npersistence true\npersistence_location /var/lib/mosquitto/\n\n#log_dest file /var/log/mosquitto/mosquitto.log\nlog_dest syslog\n\nlog_type error\nlog_type warning\nlog_type notice\nlog_type information\nlog_type debug\nlog_type subscribe\nlog_type unsubscribe\nlog_type websockets\n#log_type all\n\nconnection_messages true\nlog_timestamp true\n\ninclude_dir /etc/mosquitto/conf.d\n
\n", "Title": "Mosquitto debug level log - How to keep all entries but PINGREQ/PINGRESP", "Tags": "|mosquitto|", "Answer": "

I had the same requirement as you, so I modified the Mosquitto v1.5.3 source code and added a custom log_type to mosquitto.conf:

\n\n
log_type ping\n
\n\n

Source on GitHub.
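If patching the broker is not an option, a workaround (my suggestion, not part of the original fix) is to filter at the syslog layer instead. Assuming rsyslog and the 'log_dest syslog' setting from the question, a rule file such as the following (the filename is a hypothetical example) discards ping entries before they reach the log:

```
# /etc/rsyslog.d/10-mosquitto-ping.conf -- drop mosquitto ping log entries
:msg, contains, "PINGREQ" stop
:msg, contains, "PINGRESP" stop
```

Note that a 'contains' match drops any syslog line containing those strings, not just Mosquitto's ping entries, so this is cruder than the source-level change above.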

\n" }, { "Id": "1687", "CreationDate": "2017-06-20T02:40:44.997", "Body": "

I have a physically remote thing. It's on someone else's network so I can't get a static IP address for it. What is the best way to track its IP address?

\n\n

I can imagine just publishing a \"heartbeat\" that includes the IP address to some service that will store it for me. If there is some sort of software problem on my thing's end, I could potentially lose it forever in this setup.

\n\n

Is there a more robust way to keep track of the thing's IP address?

\n", "Title": "How to find remote thing's IP address?", "Tags": "|networking|ip-address|", "Answer": "

Your thing probably doesn't have a unique IP address in this context, unless it uses IPv6. It will have an address in a private space, such as 10.0.0.0 \u2013 10.255.255.255 or 192.168.0.0 \u2013 192.168.255.255; this will be behind a NAT gateway (which has the ISP-assigned public address).

\n\n

Your thing can probably initiate outbound connections to a server (which can include two-way communication), but ultimately your thing needs to 'phone home' in order for your server to communicate with it. Your thing's IP address shouldn't be something you need to care about, unless one thing needs to talk directly to another. If you do need its IP address, your target's router will need to port forward for you.
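The heartbeat idea from the question can be this simple in practice. Here is a minimal Python sketch; the URL, device ID and use of a plain JSON POST are illustrative assumptions, not a specific service's API:

```python
# Minimal heartbeat sketch: periodically report this device's address to a
# server you control. The URL and device ID below are placeholders.
import json
import socket
import urllib.request

def local_ip() -> str:
    # Find the address of the interface used for outbound traffic.
    # connect() on a UDP socket only selects a route; no packets are sent.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()

def heartbeat_payload(device_id: str, ip: str) -> bytes:
    return json.dumps({"device": device_id, "ip": ip}).encode()

def send_heartbeat(url: str, device_id: str) -> None:
    req = urllib.request.Request(
        url,
        data=heartbeat_payload(device_id, local_ip()),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

# Run from cron or a systemd timer, e.g. every five minutes:
# send_heartbeat("https://example.com/heartbeat", "thing-01")
```

The server also sees the NAT gateway's public address on each request, so recording both the reported private address and the observed public one covers the cases described above.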

\n" }, { "Id": "1693", "CreationDate": "2017-06-22T16:33:31.737", "Body": "

I found out about IOTA, which is apparently a big solution for IoT \nhttps://iota.org

\n\n

But the info about it feels a bit abstract to me. I'd like to know: what are some specific, actual real-world use cases of how it can be useful?

\n", "Title": "How is the IOTA cryptocurrency network useful for devices in the IoT?", "Tags": "|cryptography|iota|", "Answer": "
\n

I'd like to know what are some specific actual real world use cases

\n
\n\n

Visit \"Implementing first Industry 4.0 Use Cases with IOTA DAG Tangle\u200a\u2014\u200aMachine Tagging for Digital Twins\" to read about an actual use case by Innogy SE (energy company based in Germany, a subsidiary of the German energy company RWE).

\n\n

To quote briefly from the article:

\n\n
\n

In this blogpost my preferred industry 4.0 use case will be described as a first example where the IOTA protocol can be applied in practice today: IOTA for tagging of physical machines in a digital twin architecture (proof-of-concept). This use case combines some very important decentral technologies such as IPFS, IPLD and BigchainDB for file storage and database layer with the DAG (directed acyclic graph) tangle as well.

\n
\n\n

[...]

\n\n
\n

A practicable first application of a digital twin with IOTA is CarPass for vehicles telematics data. The CarPass solution securely captures telematics data (e.g. mileage, trips, environmental, maintenance data) and stores them immutable in the digital twin for private passenger, fleet or commercial vehicles.

\n
\n" }, { "Id": "1717", "CreationDate": "2017-06-27T14:47:58.173", "Body": "

I have an Amazon account, as does my wife. We share Amazon Prime through household setup, and have added our daughter to our kids list. We purchased an Echo Dot for our daughter, and are likely going to get a few more for around the house.

\n\n

The confusion lies in how to set this up for her. Apparently she can't have an Amazon account, so I'm not sure how even to set up the new intercom features, etc. The only way I've seen so far is to set it up under my name.

\n\n

I'd like for her to be able to have her own reminders, calendar, features, etc.

\n\n

Can someone please outline a practical way to set this up? I'd rather not sign into my Amazon account and be blasted with advertisements for all the new pop songs or things to buy based on her usage of the Echo.

\n\n

Update\nIn comments:

\n\n
\n

It would help for non-experts if you're specific about what prevents user specific accounts, or prevents you setting up a sockpuppet account

\n
\n\n

On the apparent issues: what I ran into was that, when setting up the Echo for the first time, it would not allow me to select my child's account in the list, only mine or my wife's. Sockpuppet? As in a fake account? I was hoping to avoid that route :)

\n", "Title": "A practical guide to Amazon Echo, families with children?", "Tags": "|alexa|amazon-echo|", "Answer": "

This [1] article from 2015 says:

\n
\n

To use multiple accounts with Echo, you\u2019ll need to set up an Amazon Households account, which allows two adults in the same house to share Prime benefits, and allows parents to share content with up to four children. Amazon says it\u2019s not supporting children\u2019s accounts on Echo at the moment, however.

\n

As ZDNet notes, families can now say \u201cAlexa, switch accounts\u201d to cycle between the two adult account holders.

\n
\n

If the situation has not changed, children's accounts are still blocked by these restrictions.

\n

[1] http://www.techhive.com/article/2977784/connected-home/new-update-makes-amazon-echo-more-family-friendly.html

\n" }, { "Id": "1719", "CreationDate": "2017-06-27T23:13:04.407", "Body": "

By reading the iFixit teardown, one can see that the AirPods use the W1 chip for wireless communications. Since both AirPods are wireless, some questions remain unanswered.

\n\n

Do both AirPods connect to the phone? Or does one AirPod connect to the phone and the other to the first AirPod? Do both AirPods use Bluetooth technology?

\n\n

From the iOS/Android perspective, is it possible to connect to multiple Bluetooth devices simultaneously?

\n\n

Here is a small excerpt from the iFixit teardown that lists the components inside the AirPods:

\n\n\n", "Title": "How do the AirPods communicate with the phone?", "Tags": "|hardware|bluetooth|bluetooth-low-energy|", "Answer": "

Apple is keeping the technology it uses for the AirPods under wraps, so my answer is a best attempt based on the information in the public domain.

\n\n
\n

Do both Airpods use Bluetooth technology?

\n
\n\n

Yes, because

\n\n\n\n
\n

Do both Airpods connect to the phone? Or does one Airpod connect to the phone and the other Airpod?

\n
\n\n

I think Apple has implemented its own patent, US8768252: Un-tethered wireless audio system, for the AirPods. What they may have in place is a Bluetooth scatternet, with one piconet containing the phone and one of the pods, and the other piconet containing both AirPods.

\n\n

The complete theory of operation is described in the patent. The key concept is captured in the following diagram from the patent. \"Airpod

\n\n

So it seems that, effectively, one of the AirPods is actively connected to the phone and the other is in promiscuous mode.

\n" }, { "Id": "1724", "CreationDate": "2017-06-28T18:49:05.157", "Body": "

There are a handful of \"smart ceiling fans\" available that are really expensive (up to $500).

\n\n

Is there an easy way to just add a smart wall switch with a cheap dumb ceiling fan to get the same result? The fans in question are hardwired, not using any plug.

\n\n

Some details:

\n\n\n", "Title": "Ceiling fans - Can I use a smart switch to control a dumb fan?", "Tags": "|smart-home|alexa|apple-homekit|", "Answer": "

You probably need a relay rather than a 'smart light switch'. This is because smart light switches frequently rely on leakage through the lamp to power the electronics in the switch; with a fan motor, this might not work as intended. You would also risk damage if you used a dimmer to control the motor.

\n\n

The key difference with a smart relay is that the switching element takes feed and return power directly, providing independent terminals for the load. They can use either mechanical or electronic relays - just make sure it is suitably rated for an inductive load.

\n\n

As an example, the LW821. You would typically wire this in the ceiling void, and then have the challenge of how to manage the switch - so you might replace the switch with an LW-RF transmitter (confusingly described as a wire-free switch) rather than trying to come up with a 3-way control.

\n\n

You will not be able to mount the relay at the switch location because there is no 'common return/neutral' in a normal switch location. Effectively, you need to install a switched spur (and this most likely means some places require an electrician to certify the work).

\n" }, { "Id": "1731", "CreationDate": "2017-06-29T18:54:49.727", "Body": "

The IFIXIT Google Home tear-down reveals the Marvell Avastar 88W8897 WLAN/BT/NFC SoC.

\n\n\n", "Title": "Does Google Home support any NFC use cases?", "Tags": "|google-home|google-assistant|nfc|", "Answer": "

Currently, there is no support for NFC on the Google Home. I couldn't find an authoritative source to say that there isn't support, but there are no known features as of writing that use NFC capabilities. I'm also not aware of any plans in the near future to use the chip.

\n\n

Note that the chip is used by several other devices (most notably the Xbox One) which do not actually use the NFC capability of the chip at all. And, as noted by iFixit, the Google Home is internally very similar to the Chromecast, which also has an NFC-enabled chip without actually using it:

\n\n
\n

Marvell Avastar 88W8887 VHT WLAN, Bluetooth, NFC and FM Receiver

\n
\n\n

I would conjecture that the chip is simply the best option that is mass produced for Google's device, and the fact that NFC is supported was not a factor in their chip choice.

\n\n

Similarly, the Google Home has full Bluetooth support, but didn't use it... until very recently. That is a sign that they could probably support NFC in the future, but a suitable use case would need to be devised. However, there are no known plans right now.

\n" }, { "Id": "1750", "CreationDate": "2017-07-02T13:23:38.397", "Body": "

Is there a way to train Alexa to use preferred volume levels, such that Alexa

\n\n\n", "Title": "Can Alexa be trained to use preferred volume levels?", "Tags": "|alexa|amazon-echo|smart-assistants|", "Answer": "

At the moment, it is impossible to set a schedule for Alexa's volume, according to this reddit thread. Instead, you must manually tell Alexa to \"set volume to 0\" at night, then \"set volume to [your preferred value]\" in the morning.

\n\n

I believe you can set the alarm/notification volume separately from the master volume in the app (certainly as of December 2016), but I'm unaware of a setting to change music volume separately from the voice volume of Alexa's replies.

\n\n

As always, it may be worth sending feedback requesting this feature if there isn't a solution available; Amazon's team does seem to listen to and prioritise requested features.

\n" }, { "Id": "1752", "CreationDate": "2017-07-03T16:06:13.640", "Body": "

Surveillance cameras, such as the Nest Cam, provide a live stream of their feed. Given such a device, which is always connected and streaming, what kind of network security model do such products employ?

\n\n

I assume such products cannot afford to let the stream be eavesdropped on or tampered with.

\n", "Title": "What kind of network security is employed in live streaming surveillance cameras?", "Tags": "|home-security|surveillance-cameras|nest-cam|", "Answer": "

The default operation of the camera(s) may not be secure. If security features are enabled, they will likely include authorization and encryption for access to the video feed.

\n\n

However, you are best off assuming that even with security features enabled, your cameras are not secure. Back doors baked into these products at many levels, before the device is ever assembled, have been widely reported.

\n\n

If you want to keep something secret, do not record or transmit it.

\n" }, { "Id": "1766", "CreationDate": "2017-07-06T09:57:45.240", "Body": "

I'm trying to develop my first skill, and I cannot find proper information on how to create a home card.

\n\n

https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/providing-home-cards-for-the-amazon-alexa-app#creating-a-basic-home-card-to-display-text

\n\n

It says I have to add it to the JSON response, but I do not understand where exactly the JSON response is.

\n\n

Any help is much appreciated.

\n", "Title": "How to create a home card in Alexa", "Tags": "|alexa|amazon-echo|", "Answer": "

In code, it could look like this if you're using Node.js:

\n\n
const LaunchRequestHandler = {\n    canHandle(handlerInput) {\n        return handlerInput.requestEnvelope.request.type === 'LaunchRequest';\n    },\n    handle(handlerInput) {\n        var reprompt = '';\n        const speakOutput = 'Protokollaufnahme gestartet.';\n        return handlerInput.responseBuilder\n            .speak(speakOutput)\n            .reprompt(reprompt)\n            .withSimpleCard('Protokollaufnahme', speakOutput)\n            .withShouldEndSession(false)\n            .getResponse();\n    },\n};\n
\n\n

The card is initialized in the response like this:

\n\n
.withSimpleCard('title', 'content')\n
\n\n

With this, the card is automatically added to the JSON output.
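For context, the JSON response the documentation refers to is simply the body your skill service returns to Alexa for each request. With withSimpleCard the SDK builds it for you; the serialized response contains a card object roughly like this (abridged; field names follow the Alexa Skills Kit response format, values taken from the handler above):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Protokollaufnahme gestartet."
    },
    "card": {
      "type": "Simple",
      "title": "Protokollaufnahme",
      "content": "Protokollaufnahme gestartet."
    },
    "shouldEndSession": false
  }
}
```

So you never have to hand-edit the JSON yourself; the responseBuilder calls assemble it for you.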

\n" }, { "Id": "1768", "CreationDate": "2017-07-06T12:55:59.843", "Body": "

Use case:

\n\n
    \n
  1. I have a lot of Utilite2 single board computers to configure

  2. \n
  3. On each device I installed Linaro (Ubuntu-based), booted from MicroSD card. Linaro is recommended OS for Utilite2

  4. \n
  5. On each device I have to do a lot of manual steps (all of them from command line):

    \n\n
  6. \n
\n\n

I would like to automate the whole manual workflow described in point no. 3. I can write a bash script which will do it for me, but I would like to ask: is there any better way than scripting it?

\n\n

When you work with web apps on the server side, you probably use tools like Chef or CloudFormation templates to set up servers and clusters. You don't configure each server manually. This approach has another big advantage - you can keep your configuration as code and reuse it for each server. I need to understand how to do this for physical hardware.

\n\n

When it comes to IoT, most resources and presentations that I found focus on the big picture. They show how devices \"talk\" to each other and what the system's architecture looks like. But we cannot forget that before each device joins the system, it needs to be configured somehow.

\n\n

From my point of view (as a beginner in IoT) there are the following options to achieve this:

\n\n\n", "Title": "Which tools/frameworks do you use to configure an array of SBCs", "Tags": "|linux|software|", "Answer": "

There are several ways that you can create a modified distro, and the best approach will depend on your environment and how you anticipate it evolving over time.

\n\n\n\n

In your case, it sounds like you probably want to take one board, perform the customisation, shut down, and clone the uSD card as many times as necessary. If you have per-device customisation, maybe a 'run-once' script can handle the uniquification by interacting with a server.
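Such a 'run-once' step can be sketched as a small script that checks a marker file, pulls per-device settings from a server, and records completion. The following Python sketch is purely illustrative; the marker path, the provisioning URL and the JSON payload shape are placeholder assumptions:

```python
# First-boot "run-once" provisioning sketch. Checks a marker file so the
# customisation happens exactly once per cloned image.
import json
import pathlib
import urllib.request

MARKER = pathlib.Path("/var/lib/provision.done")  # placeholder path

def fetch_identity(url: str) -> dict:
    # Ask a provisioning server for per-device settings (hostname, keys, ...)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def provision(url: str, marker: pathlib.Path = MARKER,
              fetch=fetch_identity) -> bool:
    """Return True if provisioning ran, False if it was already done."""
    if marker.exists():
        return False
    identity = fetch(url)
    # ... apply the settings here: set the hostname, install packages,
    # drop config files, register the device with your backend, etc. ...
    marker.parent.mkdir(parents=True, exist_ok=True)
    marker.write_text(json.dumps(identity))
    return True
```

Hooked into a boot service (e.g. a systemd unit), this runs the manual steps on first boot of each clone and is a no-op on every boot after that.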

\n\n

Deciding which of these is best will depend on your production scale and how long you plan on doing this work. Does it need to scale to next year's distro, new hardware, new platforms? Will the payloads need to auto-update, and how will you cope with keeping the base OS patched?

\n\n

As an example of how a custom image can be built, you could look at this Raspbian image generator, PiBakery, or any others that Google will offer.

\n" }, { "Id": "1775", "CreationDate": "2017-07-08T19:53:25.307", "Body": "

I am reviewing potential candidates for an IoT position. The position, in an industrial factory setting, requires the candidate to contribute to the four stages of an IoT infrastructure as defined in the article: How to design an IoT-ready infrastructure: The 4-stage architecture

\n\n

\"4-stage

\n\n

Since IoT is a relatively new field, some of the candidates only talk about what they learned from building PCs. For example, candidate A learned from building audio workstations:

\n\n
\n

Faster processors with more cache are preferable to higher core counts which can adversely affect system performance. Chipsets handles all aspects of the communications between system components: low latencies and high data throughput for hard disks and audio/DSP cards make chipset/motherboard choices paramount. Maximizing data throughput with higher HD rotational speeds or employing a separation-of-concerns HD partitioning strategy positively affect system performance; as does the HD cache size. RAM size is directly proportional to system performance. The audio card\u2019s driver-ASIO compatible-is a vital component to attain low latency. Graphics cards need to support OpenGL 2.0 or higher. 64-bit OS is preferred.

\n
\n\n

Essentially, I am trying to derive IoT qualifications for this architecture from a PC-building skill set.

\n\n

It seems obvious that stage 3 is where the candidates could have an impact \u2014 given the quote \u2014 but are there opportunities in stages 1, 2 and 4 where the candidate could contribute using solely the experiences listed above?

\n\n

Where would I draw direct lines from the experiential statements to the implied tasks in the 4-stage architecture?

\n", "Title": "Drawing parallels between building an IoT system and building a PC?", "Tags": "|hardware|system-architecture|software|", "Answer": "

Stage 1:

\n\n

PC-side peripherals, especially input devices like the mouse, scanner and keyboard, are the equivalents of sensors. What matters are the correct pins, correct protocols, signal forms, etc.

\n\n

Stage 2:

\n\n

The buses between the internal parts of the PC, and between the processor and external devices, correspond to data acquisition and networking.

\n\n

Stage 3:

\n\n

Edge processing corresponds to the sound card, or the video processor driving the screen: the same kind of offloading of tasks from the main CPU.

\n\n

Stage 4:

\n\n

The CPU is the computer's equivalent of the cloud.

\n" }, { "Id": "1783", "CreationDate": "2017-06-29T14:04:07.987", "Body": "

So I am working on a project where I have torn away all the RC-related parts of an older 1/10-scale racing buggy I had as a kid, and am replacing said parts with some Arduinos and a GPS to create a super-rudimentary autonomous vehicle. I want to add a Raspberry Pi Zero W as an on-board base station for data logging and network control through a web app I'll design later on with my server.

\n\n

My concern is that short of getting some kind of data-box from Verizon or AT&T and paying a terribly large monthly bill on a contract I don't want, I'm not sure of any other cheap options.

\n\n

So what options do I have available to get the Pi on the cellular network that won't cost me an arm and a leg?

\n", "Title": "Connecting your Pi to the internet when mobile", "Tags": "|networking|", "Answer": "

I think that the Electron by Particle may be what you are looking for. The Electron allows you to build a device that can connect to a 2G or 3G mobile wireless network.

\n

In one of the previous comments you mentioned that your data usage probably will not exceed a megabyte per month. With the Electron you are charged a monthly base rate of $2.99 (which includes the first megabyte) and then $0.99 per additional MB.

\n

From a technical point of view, the Electron is connected to Particle's cloud and exchanges messages with it. You can then control the Electron through your web app by sending HTTP requests from your web app to Particle's cloud. The Electron has GPIO pins (also for Serial/UART communication), so depending on your needs you can connect it to your Raspberry Pi - for more information go here.

\n

In a general scenario, communication between you and the Electron would look like this:

\n
    \n
  1. write a function which handles command on Electron:
  2. \n
\n
int callRaspberry(String command) {\n    //handle communication here\n}\n
\n
    \n
  1. register previous function during setup:
  2. \n
\n
void setup()\n{\n   Particle.function("callRaspberry",callRaspberry);\n}\n
\n
    \n
  1. make a request to Particle's cloud, to call the function on Electron:
  2. \n
\n
curl https://api.particle.io/v1/devices/<DEVICE_ID>/callRaspberry \\\n  -d access_token=<YOUR_ACCESS_TOKEN> \\\n  -d arg=<COMMAND_VALUE>\n
\n

More code examples can be found here.

\n

I've also seen that Hologram provides devices similar to the Electron, and their service is also cheaper. You can find a comparison here.

\n" }, { "Id": "1786", "CreationDate": "2017-07-10T17:54:36.887", "Body": "

The Ring Floodlight looked pretty nice until FakeSpot warned that most of the positive reviews seem to be bogus, so I've decided not to pursue the commercial product and to try to figure out how to get the same functionality otherwise.

\n\n

Is there a smart motion detection flood light that could IFTTT trigger a camera to start recording?

\n\n

Or would it be better to keep them separate? For example a night vision motion detecting camera doing its thing, and a separate dumb motion detecting flood light that just turns on when someone walks past?

\n\n

(Have just an Amazon Echo but don't think this helps me much).

\n\n

Ring Camera and Flood Light: https://amazon.com/Ring-Floodlight-Camera-Motion-Activated-Security/dp/B0722R3WV5

\n\n

FakeSpot Analysis: http://fakespot.com/product/ring-floodlight-camera-motion-activated-hd-security-cam-with-floodlights-two-way-talk-and-siren-alarm-black

\n", "Title": "Ring Floodlight - Simple alternative by combining camera and floodlight?", "Tags": "|amazon-echo|digital-cameras|lighting|surveillance-cameras|", "Answer": "

The presence of a commercial product which has reviews of varying quality should be a good sign that building something like this from scratch is a non-trivial undertaking. Doing IoT well is hard.

\n\n

Doing it yourself brings some advantages - you can tune the installation for a specific location, implement your own configuration strategy (most likely cli based), associate with just your home router, etc.

\n\n

There is no clear answer to which approach is better. Why are you doing this, what skills do you have already, what are you prepared to learn, is security important or can you rely on an ad-hoc implementation providing security through obscurity?

\n\n

Start by defining your requirements: lighting, motion detection, video capture. Look at the commercial product you rejected, and decide which aspects of it you need to improve on (this is subjective).

\n\n

List out some alternative implementations. This could be:

\n\n\n\n

Trade off the time/cost/performance of these options, and go with whichever you prefer. You might find that some of the more low-level approaches are harder but are better documented (since the off-the-shelf components are unlikely to be documented, and will rarely be designed with extensions in mind).

\n\n

Fundamentally, any working commercial product which satisfies your spec (even if it's got some flaws), will be far easier. Far more research is needed.

\n" }, { "Id": "1788", "CreationDate": "2017-07-11T10:24:21.623", "Body": "

My English pronunciation and accent are different from how American or British English is spoken. I suspect this is contributing to misses for the Google Home device.

\n\n

Is there a way that I can train the Google Home device, either generally or specifically, for the words that I know it has a tendency to miss?

\n", "Title": "Is it possible to train the \"Google Home\" device for different styles of pronunciation?", "Tags": "|google-home|google-assistant|voice-recognition|smart-assistants|", "Answer": "

In 2024, Alexa asks for more than wake words.

\n\n

One of the ways to retrain it is

\n\n" }, { "Id": "1794", "CreationDate": "2017-07-12T12:49:07.070", "Body": "

There are two MQTT brokers, and the connection between them should support traffic shaping. \nBroker A has multiple clients who publish data; Broker B has multiple subscribers.

\n\n

Is there a possibility to enable traffic shaping on the connection, to ensure that every publisher has a minimum of guaranteed bandwidth on the connection to Broker B?

\n\n

This scenario is implemented using the Mosquitto MQTT broker with the broker-bridge feature, to ensure every MQTT message will be sent only once over the connection between brokers A and B.

\n", "Title": "Traffic Shaping and MQTT", "Tags": "|mqtt|mosquitto|", "Answer": "

No, because no information about who published the message is included in the message header - only the topic and any retained flags.

\n\n

The bridge between the two brokers is exactly the same sort of connection as between a normal client and the broker; it looks to the remote broker just like any other client connection.
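For reference, the bridge on broker A is declared in mosquitto.conf roughly like this (the connection name and hostname are placeholders); note that nothing in it carries per-publisher information, which is why per-publisher shaping is not possible at this level:

```
# Bridge from broker A to broker B
connection bridge-to-b
address broker-b.example.com:1883
# Forward all outbound messages once, at QoS 0
topic # out 0
```

Any shaping would therefore have to be applied to the bridge's TCP connection as a whole (e.g. at the network layer), not per original publisher.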

\n" }, { "Id": "1799", "CreationDate": "2017-07-13T15:26:08.500", "Body": "

I read an article recently from InternetOfBusiness.com which outlines how Bosch has been implementing remote control of their A/C systems. One thing I fail to see in the article, however, is how the IoT aspect in particular helps save energy. The article says,

\n
\n

The engineers also set up a feedback panel at the canteen entrance to see if the IoT system was working; users could rate whether the temperature was too cold or hot for their liking.

\n

\u201cSince we wanted to create a better experience for diners and lower energy consumption, we needed to check they were happy with the result,\u201d he said.

\n

He added that results back from the system and the addition of ceiling fans meant that the company could increase the canteen thermostat from 24.5\u00b0C to around 26\u00b0C and the feedback from diners remained positive.

\n

\u201cThis temperature change almost halved the canteen\u2019s cooling demands. We estimate that this will not only save around 4,000 Singapore dollars annually but reduce carbon dioxide emissions by more than eight tonnes,\u201d said Staudacher.

\n
\n

How does the IoT make the air conditioning any more efficient? Is it just that the people in the building can adjust it themselves, making the temperature not get as cold, consequently saving on energy, or what?

\n", "Title": "How does the IoT reduce Bosch cooling costs?", "Tags": "|hvac|", "Answer": "

Quite a biased article about IoT and energy saving.

\n\n

From article:

\n\n
\n

the addition of ceiling fans meant that the company could increase the canteen thermostat from 24.5\u00b0C to around 26\u00b0C and the feedback from diners remained positive.

\n
\n\n

They learned they could set the cooling system's target temperature to a higher (warmer) value by adjusting the system using user feedback together with the IoT system's data and algorithms. By spending less effort pushing cool air they saved money; instead, they used more fans.

\n\n

Adding more fans, using less cooling power and collecting feedback would have worked without IoT by educated guessing, but IoT proved its usefulness in decision making once again.

\n\n

Why did I say biased?

\n\n

By using IoT for a short period they came up with an idea to save energy. Nobody claimed that adjusting the temperature with IoT technology would save more than keeping the fans on all the time; at least, nothing in that direction was said.

\n" }, { "Id": "1812", "CreationDate": "2017-07-17T14:49:06.780", "Body": "

I'd like to set up two temperature sensors, one outside and one inside my house, in order to compare temperatures and act on the difference (opening/closing a window, for example).

\n\n

The issue I'm facing is which platform to pick. My initial thought was to go with the Photon, but the price is quite high when I can go for a Raspberry Pi Zero for way less.\nOn the other hand, the Pi Zero requires a lot of power, and since I plan to place one sensor outside, I was hoping to \"place it and forget it\" at least for a few months, with some AA batteries (one? two?) on it.

\n\n

So I'm asking for your help. I'm open to other platforms to implement my plan. Here's what I'm looking for in this platform:

\n\n
    \n
  1. Wi-Fi capable (or some other transmission for the outside remote, and Wi-Fi for the inside)
  2. \n
  3. Can live for weeks or months on simple AA batteries
  4. \n
  5. Not expensive. It's just a small side project.
  6. \n
\n", "Title": "Which IoT platform should I use for low-energy temperature sensors to be powered by battery?", "Tags": "|raspberry-pi|", "Answer": "

I see at least 3 choices to make in your system design.

\n\n

RF protocol: Wi-Fi is not very energy efficient. You can mitigate this by only sending readings infrequently (measure every minute, transmit every 20 minutes). BLE or similar might be better, but you need to trade off range and parts cost if you opt for something a little less commodity. If it's for personal domestic use, much over 2 years of battery life is probably not worth much extra optimisation.

\n\n
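As a rough sanity check on that duty-cycling strategy, here is a back-of-the-envelope estimate in Python. Every current and capacity figure is an assumed ballpark value, not a measurement of any particular board:

```python
# Back-of-the-envelope battery life for a duty-cycled sensor node.
# All figures are assumed ballpark values, not measured ones.
AA_CAPACITY_MAH = 2000      # typical alkaline AA capacity
SLEEP_MA = 0.02             # deep-sleep current
ACTIVE_MA = 80.0            # current during a Wi-Fi transmit burst
ACTIVE_S_PER_TX = 2         # seconds awake per transmission
TX_PER_HOUR = 3             # batch readings, transmit every 20 minutes

avg_ma = SLEEP_MA + ACTIVE_MA * ACTIVE_S_PER_TX * TX_PER_HOUR / 3600
days = AA_CAPACITY_MAH / avg_ma / 24
print("average draw %.3f mA -> about %.0f days" % (avg_ma, days))
```

With numbers in that range the average draw is dominated by the transmit bursts, and a year or more on AAs is plausible, which is why the measure-often, transmit-rarely pattern matters so much.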

Inside unit: You probably have different power constraints for this unit, but you don't specify them. Critically, it doesn't need to be the same platform as the outdoor unit, but it doesn't sound like you need an SBC running Linux here. How you want to develop your stack is maybe the driving factor (along with familiarity).

\n\n

Outdoor unit: Currently you have a low feature requirement, just a digital interface to your thermometer. You might want a resolution of 0.25 \u00b0C or better to allow some scope in how you use it. You certainly need a sleep mode, but the choice is quite wide. Development environment and ease of use might be as important a factor as price. You have no compute payload worth worrying about beyond the communication protocol, so a low clock frequency makes sense.

\n\n

Another factor you might want to consider is how likely you are to expand this in the future; this might affect your choice of board (for example, if you want to add a display/control unit indoors).

\n\n

It's fairly clear that the only important choice here is that the outdoor/battery unit should be a microcontroller with sleep (and RF) rather than a full Linux platform. Newer platforms are likely to offer better energy efficiency, but might be sufficiently expensive as to offset the benefit in this use case.

\n" }, { "Id": "1819", "CreationDate": "2017-07-19T01:05:34.097", "Body": "

I understand we usually use a relay as a controlled switch, but I'm wondering if a Bluetooth chip could work on high voltage, I mean 110-220 V.\nSo is it theoretically possible to design a Bluetooth chipset running directly on such a voltage?\nIf yes, what are the cons of doing so? From my limited knowledge, I guess it's mainly power consumption problems.

\n\n

Hope I'm clear enough; thanks for any input.

\n", "Title": "Bluetooth chipset high voltage", "Tags": "|smart-home|bluetooth|", "Answer": "

AFAIK you cannot do that. Bluetooth is quite a complex protocol, with roughly a thousand pages of documentation, and the power electronics needed to run such a circuit directly from mains would be hopelessly oversized.

\n\n

Think of your PC: it takes in 110 or 220 V, yet there is still a power supply unit that lowers the voltage to low-voltage DC, as in every microelectronic circuit you see nowadays.

\n" }, { "Id": "1828", "CreationDate": "2017-07-19T22:06:42.440", "Body": "

I want to know if it works in countries where it's not sold. It's the same Internet, so it should work. Right?

\n", "Title": "Can I use Google Home in countries where it is not officially available?", "Tags": "|google-home|", "Answer": "

Google Home can be set up and used in other countries. This Beebom article provides a step-by-step guide to doing so.

\n\n

During setup you'll see this popup:

\n\n
\n

You may see a warning telling you that the Google Home was manufactured for a different country, and may not work with your WiFi network.

\n
\n\n

\"May not work with your WiFi network\" could happen if your 2.4 GHz WiFi router is set to use one of the channels beyond channel-11, so that is one aspect.

\n\n

Now, coming to services: users have reported on forums that music services like Pandora, YouTube Red and Spotify do not work for them. But that is due to these services not being geographically supported in all countries.

\n\n

I'd guess the greatest number of services will be blocked in China due to government policies; even searches may be affected.

\n" }, { "Id": "1858", "CreationDate": "2017-07-26T19:33:26.967", "Body": "

I answered the question linked below and started to wonder: what are the hardware requirements for running an IoT stack, for example MQTT over LoRaWAN? Would a Raspberry Pi 3 cope or not?

\n\n

I do not care how the RPi connects to LoRaWAN; I'm mainly asking about RAM and storage use.

\n\n

How to select simple light weight IoT server for development?

\n", "Title": "Would RPI 3 serve as IoT server with MQTT?", "Tags": "|mqtt|hardware|raspberry-pi|lorawan|", "Answer": "

A Raspberry Pi 3 is a pretty serious bit of kit when you think about it:

\n\n\n\n

That is a huge amount of memory, easily more than enough to run an MQTT broker and something like Node-RED to interface between a LoRa radio and the broker.

\n\n

We have a commercial gateway (MultiTech MultiConnect Conduit) in the office, which is a very similarly specced bit of kit, and if you google \"LoRaWAN gateway\" most of the first page is about how to build one with a Pi.

\n" }, { "Id": "1874", "CreationDate": "2017-07-29T05:44:21.993", "Body": "

I could not find any relevant information on this; sites just recommend Class 10. I want to know if it is okay to use a Class 4, 8 GB SD card for making a bootable Linux drive for the Intel Galileo, or even for installing Windows. Are there any consequences of doing so?

\n", "Title": "What will happen if I use Class 4 SD card, instead of recommended Class 10 for Intel Galileo?", "Tags": "|microcontrollers|linux|microsoft-windows-iot|intel-galileo|", "Answer": "

There are 4 standard SDA ratings that I'm aware of: 2, 4, 6, and 10. Basically, the number corresponds to the minimum write speed in MB/s that the card is capable of. So for instance, a Class 4 card has a minimum write speed of 4 MB/s, whereas a Class 10 has a minimum write speed of 10 MB/s.

\n\n

As Helmar mentioned, this means that the Class 4 will be slower. If you're running well under the 4 MB/s limit, however, as Helmar said, you could be fine with the Class 4. That said, if you're running close to the 4 MB/s limit, one problem I have had when recording video to SD cards is that the card starts heating up, which can damage it. Frequently, upgrading the card resolves that issue.
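To put those class numbers in perspective, the rating floor translates directly into a worst-case time for writing a full image (real cards are usually faster than their class minimum). A quick Python check:

```python
# Worst-case time to write a 1 GiB image at the SDA class minimum speed.
# Real cards typically exceed their class floor.
GIB = 1024 ** 3
for class_rating in (4, 10):
    seconds = GIB / (class_rating * 10 ** 6)
    print("Class %d: up to %.0f s per GiB" % (class_rating, seconds))
```

So flashing an 8 GB card could take roughly two and a half times as long on a Class 4 in the worst case, but once booted, whether you notice depends on how write-heavy your workload is.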

\n" }, { "Id": "1891", "CreationDate": "2017-08-06T11:55:17.473", "Body": "

I need to develop a solution for determining whether an object/tag exists within a defined perimeter. The perimeter would be located indoors so GPS isn't an option. It would be an approx 10 metres x 10 metres rectangle. The 'tag' needs to fit in a person's pocket (however it could be battery powered), and I would need to know:

\n\n
    \n
  1. When the tag leaves the perimeter (doesn't have to be exact and I don't care where it is located within the perimeter)
  2. \n
  3. Identify which tag left the perimeter (as there will be 10-20 objects in the perimeter that need to be tracked)
  4. \n
\n\n

Would an active RFID system be the way forward? I'm thinking that I'd need 3-4 receivers spaced around the perimeter so that I can combine the readings from them to estimate the position.

\n\n

Really appreciate any advice or suggestions.

\n", "Title": "Wireless communication technologies for indoor location-tracking", "Tags": "|wireless|rfid|tracking-devices|", "Answer": "

If you would like to locate objects indoors, RFID or Bluetooth would be the best choices. RFID has UHF and HF frequency bands, at 860-960 MHz and 13.56 MHz respectively, and the UHF band is divided into 902-928 MHz (USA) and 865-868 MHz (EU). Passive RFID is a better option than active if the space is not too big. I would recommend looking at SEIKO RFID; you will find more of the required information on their site.
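If you do combine readings from several receivers as you suggest, a common starting point for turning RSSI into a rough distance is the log-distance path-loss model. In this Python sketch the reference RSSI and path-loss exponent are assumed values you would calibrate on site:

```python
# Log-distance path-loss model: rssi = rssi_at_1m - 10 * n * log10(d).
# rssi_at_1m and the exponent n are site-specific calibration values
# (indoors n is typically 2 to 4); these defaults are assumptions.
def distance_m(rssi, rssi_at_1m=-50.0, n=2.5):
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

print(distance_m(-50))  # 1.0 (the reference distance)
print(distance_m(-75))  # 10.0
```

Indoor multipath makes these estimates noisy, which is why your instinct of 3-4 receivers plus some averaging is sound; for a simple "has it left the 10 m box" alarm, thresholding the combined estimate is usually enough.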

\n" }, { "Id": "1897", "CreationDate": "2017-08-07T15:00:18.663", "Body": "

I have the following hardware:

\n
    \n
  1. 3 x Particle Photons, each serving as an HTTP server

    \n
  2. \n
  3. 1 x Raspberry Pi 3 which will serve as an HTTP Client

    \n
  4. \n
\n

Upon sending an HTTP GET request to any of the Photons, the API returns:

\n
{\n  node: 1,\n  uptime: 1234556,\n  location: 'back',\n  sensor: {\n      Eu: {// Euler Angles from IMU\n           h: 0, p: 0, r: 0\n      },\n      La: {// linear Acceleration values from IMU\n           x: 0, y: 0, z: 0\n      }\n  }\n}\n
\n

I want to create a polling scheme where the Raspberry Pi client performs an HTTP GET every 0.1 seconds on each of the 3 servers.

\n

I am not sure whether there is such a thing as HTTP polling, and whether an asynchronous library like Python's Twisted is the right tool to use.

\n

I would like some advice on how a multiple-server, single-client model would function with respect to HTTP.

\n

Reference

\n

Each Particle Photon gives the above-mentioned JSON response to an HTTP GET request.

\n

The Raspberry Pi would serve as an HTTP client, sending GET requests to each and every Particle Photon.

\n", "Title": "Creating an HTTP GET polling service on a Raspberry Pi Client", "Tags": "|networking|wireless|software|", "Answer": "

Maybe the following links can help you:

\n\n

Basic client example:\nhttps://docs.python.org/2/library/asyncore.html#asyncore-example-basic-http-client

\n\n

Basic echo server example:\nhttps://docs.python.org/2/library/asyncore.html#asyncore-example-basic-echo-server

\n\n

Also, have you thought about using the UDP protocol? It may be a better fit...

\n\n

One word of caution about HTTP/1.0: as far as I know, keeping connections alive is not mandatory in that version; persistent connections were only defined in HTTP/1.1. Either way it depends on the implementation, which may or may not support them.

\n\n
\n\n
import asyncore, socket\n\nclass HTTPClient(asyncore.dispatcher):\n\n    def __init__(self, host, path):\n        asyncore.dispatcher.__init__(self)\n        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)\n        self.connect((host, 80))\n        # The request buffer must be bytes on Python 3\n        self.buffer = ('GET %s HTTP/1.0\\r\\n\\r\\n' % path).encode()\n\n    def handle_connect(self):\n        pass\n\n    def handle_close(self):\n        self.close()\n\n    def handle_read(self):\n        print(self.recv(8192))\n\n    def writable(self):\n        return (len(self.buffer) > 0)\n\n    def handle_write(self):\n        sent = self.send(self.buffer)\n        self.buffer = self.buffer[sent:]\n\n\nclient = HTTPClient('www.python.org', '/')\nasyncore.loop()\n
\n\n
\n\n
import asyncore\nimport socket\n\nclass EchoHandler(asyncore.dispatcher_with_send):\n\n    def handle_read(self):\n        data = self.recv(8192)\n        if data:\n            self.send(data)\n\nclass EchoServer(asyncore.dispatcher):\n\n    def __init__(self, host, port):\n        asyncore.dispatcher.__init__(self)\n        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)\n        self.set_reuse_addr()\n        self.bind((host, port))\n        self.listen(5)\n\n    def handle_accept(self):\n        pair = self.accept()\n        if pair is not None:\n            sock, addr = pair\n            print('Incoming connection from %s' % repr(addr))\n            handler = EchoHandler(sock)\n\nserver = EchoServer('localhost', 8080)\nasyncore.loop()\n
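If you would rather stay on plain HTTP, one simple pattern is to fan out the three GETs in parallel on each 0.1-second tick. The sketch below uses only the Python 3 standard library; it stands up three fake "Photons" locally (with a cut-down, made-up JSON payload) so it runs anywhere, and in practice you would replace the generated URLs with your devices' addresses:

```python
import json
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakePhoton(BaseHTTPRequestHandler):
    """Local stand-in for one Photon's HTTP API (cut-down payload)."""
    def do_GET(self):
        body = json.dumps({"node": self.server.node_id, "uptime": 123}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def start_server(node_id):
    srv = HTTPServer(("127.0.0.1", 0), FakePhoton)  # port 0: pick a free port
    srv.node_id = node_id
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

def poll_once(urls, timeout=0.5):
    """GET every URL in parallel; returns parsed JSON bodies in URL order."""
    def fetch(url):
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read())
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        return list(pool.map(fetch, urls))

servers = [start_server(i) for i in (1, 2, 3)]
urls = ["http://127.0.0.1:%d/" % s.server_port for s in servers]

for _ in range(3):                 # three polling ticks, 0.1 s apart
    print([r["node"] for r in poll_once(urls)])
    time.sleep(0.1)
```

The thread pool keeps one slow Photon from delaying the others, and the 0.5 s per-request timeout bounds each tick; if a device regularly takes longer than your 0.1 s period, you will need to either lengthen the period or let ticks overlap.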
\n" }, { "Id": "1901", "CreationDate": "2017-08-07T23:15:51.360", "Body": "

We are in the early stages of planning an IoT project.

\n\n

One issue we are struggling with is how our Internet based server can access each unit of our IoT project and deploy code updates, messages ... etc.

\n\n

I'm concerned about this because, of course, each IoT unit is within its own Wi-Fi network, which is closed by design.

\n\n

How does our server, an essential part of our setup, call all its 'children' within their respective closed networks?

\n", "Title": "Remote access for multiple IoT project units", "Tags": "|remote-access|", "Answer": "

It sounds like you should be looking for a complete IoT device management platform - there are too many complicated aspects of scalability, security, provisioning and firmware update for this to be a sensible thing to try and develop in house from scratch. Make sure you pick a platform which uses open standards.

\n\n

To answer your question more directly: each endpoint generally opens a TLS-secured connection to a cloud server (using something like CoAP, LWM2M or MQTT depending on the purpose of the connection), so connections are almost always initiated from the endpoint. Only with IPv6, or in particularly specific use cases, are you likely to have the cloud initiating the connection without any assistance from the endpoint.

\n" }, { "Id": "1908", "CreationDate": "2017-08-09T18:54:31.590", "Body": "

In the CoAP specification, it is implied that IEEE 802.15.4 can be used in conjunction with CoAP. Is this a requirement or can CoAP also be used with other OSI layer 1, 2 protocols such as IEEE 802.11, BLE or LTE/5G/etc?

\n", "Title": "Does CoAP depend on IEEE 802.15.4?", "Tags": "|networking|wifi|protocols|bluetooth|coap|", "Answer": "

No. CoAP is an application-layer protocol, so it's not dependent on any particular lower layer.

\n\n

Basically, that's the beauty of the OSI layers: if correctly implemented, you can mostly stack them however you want. As with everything that starts with "if correctly implemented", that's partly academic, and some protocols fit together better than others. More or less the only restriction is that the lower-level protocol must be able to carry the data of the upper layer.

\n\n

In the case of CoAP, it runs great on UDP, which is the intended protocol at the next lower OSI level, the transport layer.

\n\n
\n

Instead of a complex transport stack, it gets by with UDP on IP. \n \u2014 CoAP Website

\n
\n\n

From our daily Wi-Fi / smart phone experience we all know that IP runs great on 802.11 & LTE/5G.

\n\n

Bluetooth and its Low Energy variant, however, are actually protocol stacks that go up to the presentation layer. I'm not sure how well CoAP maps onto them directly; it may be easy, but I just don't know.

\n\n

However, with Bluetooth 4.2 they included the IPSP, which basically allows you to tunnel IPv6 over Bluetooth, enabling you to use the standard Internet protocol suite from there on upwards.

\n\n
\n

The Internet Protocol Support Profile (IPSP) allows devices to discover and communicate to\n other devices that support IPSP. The communication between the devices that support IPSP is\n done using IPv6 packets over the Bluetooth Low Energy transport.\n \u2014 Bluetooth 4.2 Specification

\n
\n" }, { "Id": "1910", "CreationDate": "2017-08-11T01:03:21.387", "Body": "

I want to install a light switch at my church that has a remote control. Similar to the Lutron Caseta. The only requirement is that it has a remote control. Prefer not to have a dimmer. I would like to have as few accessories as possible. Are there any solutions that have the switch, the remote and nothing else (i.e. a hub etc)?

\n\n

I was thinking about Z-Wave or Wi-Fi, but it doesn't matter, as long as it's reliable and easy to use.

\n", "Title": "What is the easiest to install smart switch that has a remote control?", "Tags": "|lighting|", "Answer": "

EDIT: Since having a physical remote seems to be a must, you can check out the Sonoff RF, which uses a remote to turn the lights on and off. But RF remotes are low-security and do poor filtering of bad signals, so they might be attacked. I'd go with Wi-Fi if possible.

\n\n

First of all, I don't have any partnership with this brand; I'm just sharing my experience as a maker and electronics enthusiast.

\n\n

I have a couple of Sonoff S20s at my place. Each cost me about 12\u20ac a month ago (buying from China), but you can find them for up to $20.

\n\n

They come with an app that works pretty well. You can set up scenes, and it has IFTTT integration. The phone (Android or iOS) doesn't need to be on the same network as the Sonoff to work.

\n\n

Also, the Sonoff S20 is easy to hack, and it's possible to replace the firmware with a custom one.

\n\n

It's a nice way to get started with home automation.

\n" }, { "Id": "1918", "CreationDate": "2017-08-15T21:33:02.520", "Body": "

With the 6LoWPAN stack, every node can have its own Internet IPv6 address.\nSo:

\n\n
    \n
  1. Is the 6LoWPAN stack present in the gateway, or in the sensor mote as well?

  2. \n
  3. If it's in the sensor mote, is a gateway still required? Can't a 6LoWPAN-enabled sensor connect directly to an IPv6 router?

  4. \n
\n\n

Correct me if I am missing some fact or data.

\n", "Title": "Does 6LowPAN stack helps in requirement of gateway?", "Tags": "|networking|6lowpan|", "Answer": "

In any case, a 6LoWPAN-enabled sensor network needs a gateway to interact with an end user, who might be using a standard 802.11 WLAN or an 802.3 Ethernet-based network.

\n\n

The gateway needs to translate the 6LoWPAN IPv6 addresses into addresses that standard networks can understand and route. For instance, if a 6LoWPAN node using header compression has the address ::1, there needs to be some defined way to reach that sensor node from a connected network. For that you need a gateway, which does the translation for you.

\n\n
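The translation is essentially prefix bookkeeping: the gateway restores the network prefix that header compression elided. A toy Python illustration (the 2001:db8 prefix is a documentation example, not anything the 6LoWPAN tools mandate):

```python
import ipaddress

# The gateway re-attaches the 64-bit network prefix that header compression
# elided, turning the node's short form back into a routable IPv6 address.
prefix = ipaddress.IPv6Network("2001:db8:abcd::/64")   # example prefix
short_iid = 0x1                                        # the "::1" node above
full = ipaddress.IPv6Address(int(prefix.network_address) | short_iid)
print(full)   # 2001:db8:abcd::1
```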

Have a look into 6lbr by CETIC for some more implementation specific example.

\n\n

Read this awesome IPv6-WSN-Book for a better understanding of 6LoWPAN.

\n" }, { "Id": "1925", "CreationDate": "2017-08-18T12:48:16.673", "Body": "

I am aware that you can do Alexa calling: we've even had numerous questions about it. Is it possible to do something similar with the Google Home? If so, how would it be done?

\n\n

Particularly, I want to know if it is possible to call a friend who has another Google Home and if so, how it would be done.

\n", "Title": "How do you do a Google Home call?", "Tags": "|google-home|", "Answer": "

Answer to your main question:

\n\n
\n

I want to know if it is possible to call a friend who has another Google Home

\n
\n\n

is no, you cannot do exactly that. According to the link posted by @Bence Kaulics in the comments, you can make outbound calls (to a phone), but inbound calls are not possible. That means the destination cannot be another Google Home, only an ordinary phone number.

\n\n

Making an outbound call is as easy as saying Hey Google, call NN, where NN is one of your contacts or a phone number.

\n\n

At the moment, though, 911 calls aren't possible. Another issue is that for normal users the caller's number is not shown on the receiving phone, which may be off-putting.

\n" }, { "Id": "1937", "CreationDate": "2017-08-22T20:32:03.120", "Body": "

I have Amazon Prime music. Is it possible to ask Alexa to play music similar to or like a particular artist? When I ask her to play music by [artist name], they are only songs by that artist (obviously).

\n\n

I'm looking for something similar to the same thing on Pandora or Spotify, but for Amazon Prime music (and not from my own uploaded songs). Is there a Skill available that provides this?

\n", "Title": "How to get Alexa to play music like using Amazon Music", "Tags": "|alexa|amazon-echo|", "Answer": "

To get Alexa to play music similar to an artist say:

\n
"Alexa, play songs similar to <artist name>"\n
\n

In addition, you can get Alexa to play music that is trending in a particular city. For example:

\n
"Alexa, play top songs in Tokyo."\n
\n" }, { "Id": "1949", "CreationDate": "2017-08-27T15:17:48.150", "Body": "

I have a standard Android smartphone connected to 4G (yes it can connect to my house's WIFI but the point of this question is to work from outside).

\n\n

I would like to trigger a task on demand for an external source (website click on my server, mail, iCal event, opened door) onto a Tasker task.

\n\n
\n

Sidenote : Tasker works with a profile / task system : a profile is when an event occurs (date, phone event,...), then several tasks can be bound to this profile as start and end tasks.

\n
\n\n

The particular test case is : \"When I click on 'I'm on vacation' on my server's webpage, it should send something to my phone to switch a 'Vacation' variable on tasker\" (yes my phone is nearby but I am THAT lazy ;))

\n\n

I know that there are already some alternatives like

\n\n\n\n

I would like to know if there is a \"simple\" solution, preferably without a third party, and I would like to keep the task bound to a single profile, not multiple profiles like \"hey, a mail\", \"hey, an SMS\", ...

\n\n

Reminder :

\n\n\n", "Title": "Input signal for Tasker on an external network", "Tags": "|android|tasker|", "Answer": "

Pushbullet has a Tasker plugin bundled with the Android application, which allows you to filter incoming messages and parse the text within them (with regexes).

\n\n

A program can for example

\n\n\n" }, { "Id": "1953", "CreationDate": "2017-08-28T18:49:35.910", "Body": "

I want to connect my CNC machine to AWS IoT as part of a system that contains a robotic arm and a sensor. The CNC machine is connected to a laptop and is driven by Python code. I want to use MQTT to make the CNC machine a publisher and subscriber, but I don't know how to do that. Here is the CNC Python code.

\n\n
import serial\nimport time\n\n# Open grbl serial port\ns = serial.Serial('COM3',9600)\n\n# Open g-code file\nf = open('grbl.gcode.txt','r');\n\n# Wake up grbl\ns.write(\"\\r\\n\\r\\n\")\ntime.sleep(2)   # Wait for grbl to initialize \ns.flushInput()  # Flush startup text in serial input\n\n# Stream g-code to grbl\nfor line in f:\n    l = line.strip() # Strip all EOL characters for consistency\n    print 'Sending: ' + l,\n    s.write(l + '\\n') # Send g-code block to grbl\n    grbl_out = s.readline() # Wait for grbl response with carriage return\n    print ' : ' + grbl_out.strip()\n\n# Wait here until grbl is finished to close serial port and file.\nraw_input(\"  Press <Enter> to exit and disable grbl.\") \n\n# Close file and serial port\nf.close()\ns.close()  \n
\n", "Title": "Connect CNC machine to AWS IoT", "Tags": "|mqtt|aws-iot|", "Answer": "

By far the easiest method would be to use a library such as paho-mqtt or the AWS IoT SDK for Python (see the bottom of this post), which are MQTT client libraries for Python.

\n\n

Usage guidance is provided in the linked documentation. Here is an example, for reference, which you may find useful to adapt to your purposes (it's loosely based on the documentation provided):

\n\n
import paho.mqtt.client as mqtt\nimport time\n\n# Runs when the client receives a CONNACK connection acknowledgement.\ndef on_connect(client, userdata, flags, result_code):\n    print(\"Successful connection.\")\n    # Subscribe to your topics here.\n    client.subscribe(\"sensor/topic_you_want_to_subscribe_to\")\n\n# Runs when a message is PUBLISHed from the broker. Any messages you receive\n# will run this callback. Note that the payload arrives as bytes.\ndef on_message(client, userdata, message):\n    if message.topic == \"sensor/topic_you_want_to_subscribe_to\":\n        if message.payload == b\"START\":\n            # We've received a message \"START\" from the sensor.\n            # You could do something here if you wanted to.\n            pass\n        elif message.payload == b\"STOP\":\n            # Received \"STOP\". Do the corresponding thing here.\n            pass\n\nclient = mqtt.Client()\nclient.on_connect = on_connect\nclient.on_message = on_message\n\nclient.connect(\"yourbrokerurl.com\", 1883, 60)\nclient.loop_start()\n\nwhile True:\n    # Your script must loop here, and CAN block the thread.\n    # For example, you could take a measurement, then sleep 5 seconds\n    # and publish it.\n    data_to_broadcast = get_data()\n    client.publish(\"cnc/data\", data_to_broadcast)\n    time.sleep(5)\n
\n\n

This example looks a little overwhelming... But it's quite easy when you take it in bits. Start by looking at the line client = mqtt.Client(). Here we create an MQTT client. We then register callbacks for connecting and receiving a message (these are just functions that run when an event occurs).

\n\n

In our connect callback, we subscribe to the sensor topic. Then, in the message received function, we check if the message is telling our CNC to do something, then do it (the implementation of that is left to the reader).

\n\n

Then, in the while True: loop, we publish data every 5 seconds. That's just an example\u2014you can do whatever you want there. Just remember client.publish is the function you need to publish something.

\n\n
\n\n

Alternatively, if you're looking for a library specifically for AWS IoT, you can use aws-iot-device-sdk-python. This builds on Paho, but makes the extra requirements and features (such as the Device Shadow) more easily accessible from the library.

\n\n

Remember also that AWS IoT requires a device certificate for each device\u2014their SDK will help you handle this, but Paho alone won't.

\n\n

Here is a basic pub/sub sample for Amazon's SDK. You'll notice it's broadly similar to the example above, and might be quicker to start from.

\n" }, { "Id": "1963", "CreationDate": "2017-08-30T12:35:01.387", "Body": "

I am trying to build a cheap asset tracker that can be powered by a battery pack. All I need the IoT device to do is connect to known WiFi network access points. I have access to the backend system that manages the WiFi access points.

\n\n

I considered a CHIP computer or a Pi Zero W, but both have more processing power than I need. I'm looking for a complete board with Wi-Fi.

\n", "Title": "What is the simplest programmable IoT device that can connect to Wi-Fi?", "Tags": "|microcontrollers|wifi|power-consumption|", "Answer": "

You can safely choose a NodeMCU board, based on the ESP8266. It's one of the easiest Wi-Fi modules to connect and one of the cheapest on the market; you can get one for around 400 rupees. The ESP8266 has a full TCP/IP stack, needs a 3.3 V supply, and has up to 16 GPIO pins, so you can connect multiple sensors. Don't forget to use an external power source if you add a larger number of I/O devices.

\n" }, { "Id": "2002", "CreationDate": "2017-09-04T11:56:46.197", "Body": "

I'm trying to figure out a clever way to set a Google Home timer and perform another action in one command.

\n\n

I currently have a shortcut called \"tea time\" which sets a timer for my tea. I would really like it to also trigger an IFTTT applet (or something) to dim a light along with it, as a visual representation of the timer. I also have a SmartThings hub integrated; in the case of my light script, that's where the script actually runs.

\n\n

Does anybody have any thoughts on how I could make that work so that I can run the timer and the light script?

\n", "Title": "Google Assistant Timer + Action with One Command", "Tags": "|google-home|samsung-smartthings|google-assistant|philips-hue|ifttt|", "Answer": "

Google Assistant recently added the ability to execute multiple commands in a shortcut, which solves this problem for me. I was able to change my \"tea time\" shortcut to look like:

\n\n
\n

\"Set a 4 minute timer called tea and turn on the 4 minute timer light.\"

\n
\n" }, { "Id": "2021", "CreationDate": "2017-09-07T17:59:38.977", "Body": "

I'd like my MQTT broker to be accessible from outside my home network, but I'm a bit reluctant to open a port in the firewall. And I'd like to avoid using my home IP.

\n\n

It's pretty convenient to have an unencrypted open broker at home, but that doesn't work if I am going to expose it. What other options do I have?

\n", "Title": "MQTT broker accessible from outside without opening port in firewall?", "Tags": "|mqtt|", "Answer": "

@hardillb gave a good answer, but let me try to add a few details with some &quot;real-life&quot; touch:

\n
    \n
  1. Choose a public MQTT broker. HiveMQ is a good example, and you can start with its try-out page, which describes how to connect to the broker:
  2. \n
\n
\n

Connect to Public Broker

\n

Host: broker.hivemq.com

\n

Port: 1883

\n

Websocket Port: 8000

\n
\n
    \n
  1. Choose whichever client fits you best and use it to interconnect your internal broker with the public MQTT broker. For example, your C client could be Paho MQTT. The client supports SSL/TLS, so security remains at a high level.

    \n
  2. \n
  3. Paho MQTT embedded can be your choice for external devices.

    \n
  4. \n
  5. HiveMQ has a pay-as-you-go licensing policy, so weigh it with care. In any case, you can check out this page for a list of cloud-hosted MQTT brokers available for testing.

    \n
  6. \n
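If your internal broker happens to be Mosquitto, the interconnection in step 2 can also be done with its built-in bridge feature rather than a separate client process. A minimal sketch of the relevant mosquitto.conf section (the topics and client id below are placeholders, not recommended values):

```conf
# Bridge the local broker out to a public broker. The TCP connection is
# initiated from inside the LAN, so no inbound firewall port is needed.
connection hivemq-bridge
address broker.hivemq.com:1883
# Placeholder client id; pick something unique on the public broker.
remote_clientid my-home-bridge
# Push local "home/..." messages out; pull "commands/..." messages in.
topic home/# out 0
topic commands/# in 0
```

Note that anything bridged to a public broker is readable by anyone, so either use the TLS port and authentication where offered, or only bridge topics you are happy to make public.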
\n" }, { "Id": "2040", "CreationDate": "2017-09-11T17:57:39.660", "Body": "

I have a Google Home. As a joke, I would like to program mine to answer any requests to play music with something like \"Sorry, I can only play music by Taylor Swift\" or some artist or playlist of my choice. I have Spotify premium as my music service.

\n\n

I looked briefly at the Google Home API, and it looks like you can program new actions. Is there some way I can intercept requests for music and filter them with this message?

\n", "Title": "As a software developer, is there a way I can restrict Google Home to only play certain music?", "Tags": "|google-home|google-assistant|", "Answer": "

Unfortunately, any Action that is intended to imitate Google Home system functionality is explicitly banned, and the API doesn't really facilitate doing that (not surprising, I suppose, that the API doesn't let you do something that Google doesn't want you to do!):

\n
\n

We don't allow actions that mimic or interfere with device or Assistant functionality. Examples of prohibited behaviour include:

\n\n
\n

(Actions on Google: Policies for Actions on Google)

\n

Generally, there is a fixed pattern used to invoke Actions. The image in their documentation explains it most clearly:

\n

\"Invocation

\n

Image from Actions on Google documentation; CC BY 3.0.

\n

You can also use action phrases, such as "hear a fun fact" or "give me a 5 minute workout", but the built-in Google Action will undoubtedly take precedence over yours.

\n

As an aside, it appears you can override Alexa Skills in the way you're hoping.

\n" }, { "Id": "2046", "CreationDate": "2017-09-12T09:03:42.863", "Body": "

I have a TV as a secondary PC monitor and console output, and I would like to build something to control it from the PC: turning it on and off, adjusting the volume and such. In short, a program to emulate my TV remote control.

\n\n

I have no idea where to start; any suggestions?

\n", "Title": "How can I control my TV from my computer?", "Tags": "|smart-home|software|microsoft-windows|", "Answer": "

There are generally two ways to control TVs:

\n\n
    \n
  1. IR
  2. \n
  3. RS-232
  4. \n
\n\n

Newer TVs might have Ethernet or Wi-Fi connections available, and also some level of support for CEC. You might be able to control them over the network with a manufacturer-specific app, but probably not via a documented protocol. CEC control was very spotty when it came out. I'm not sure if it's gotten more robust in the past couple of years.

\n\n

There are DIY and commercial options for both IR and RS-232. The internet has no shortage of DIY guides for IR control with Arduinos and Raspberry Pis.

\n\n

In the professional AV field, RS-232 control is what's most often used. Basically, you open a terminal session on a COM port, send some commands, and the TV does stuff. The protocol documents for the 232 ports are usually available on the manufacturer's website, or as part of a user's manual, which usually has a section for the command protocol. You may need a physical adapter for the RS-232 port, for example 3.5mm to DB9, or RS-232C to DB9. Keep in mind that RS-232 and network-based control give your program feedback, while IR does not.

\n\n
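To make the "send some commands" step concrete, here is a sketch in Python. The command grammar below is hypothetical (every vendor defines its own; take the real strings from your TV's protocol document), and the pyserial usage in the comments is an assumption:

```python
def build_command(cmd: str, set_id: int, value: int) -> bytes:
    """Build an ASCII control string in the two-letter-command style some
    vendors use. HYPOTHETICAL grammar -- check your TV's protocol manual."""
    return f"{cmd} {set_id:02x} {value:02x}\r".encode("ascii")

# With pyserial (assumed installed), sending it might look like:
#   import serial
#   port = serial.Serial("COM3", 9600, timeout=1)   # adapter cable on COM3
#   port.write(build_command("ka", 1, 1))           # hypothetical "power on"
#   reply = port.readline()                          # RS-232 gives feedback
```

The read-back on the last line is the feedback advantage mentioned above: the TV acknowledges each command, which IR cannot do.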

The OP settled on CEC, but anyone else will need to start with the specific make/model of the TV before attempting to figure out what its control options are (IR, 232, network, etc.). Commercial TVs always have 232 ports, while residential TVs might or might not.

\n\n

If you want to futz with ProAV stuff, you can usually get it online for pretty cheap. However, the software is harder to come by. Most often, the manufacturer won't give you their free software unless you're in a business relationship with them. Brands to look for: Savant, Crestron, Extron, Control4, or a company that has a booth at Infocomm (AV tradeshow).

\n\n

I don't feel like I need a disclaimer, but I do work in Professional AV. Not at any of the companies listed.

\n" }, { "Id": "2053", "CreationDate": "2017-09-12T19:33:54.020", "Body": "

I am using a NodeMCU board with WiFi capabilities to build a simple asset tracker. I have managed to find a few Arduino sketches that enable connectivity to Azure IoT Hub and post messages.

\n\n

One of the keys I need to \"load\" onto the board is the Azure Device Connection string and of course a WiFi SSID and password.

\n\n

My fear is someone might simply take the board and \"download\" the files to get access to the security credentials.

\n\n

Is my fear unwarranted or is the loss of credentials a real threat I need to mitigate?

\n", "Title": "Can programs loaded onto NodeMCU board be extracted?", "Tags": "|security|", "Answer": "

[disclaimer: I'm a security / crypto professional and deal with security architecture questions like this every day.]

\n\n

You have stumbled onto the problem of storing credentials in such a way that an unattended process can access them, but an attacker cannot. This is a well known and very difficult problem to solve.

\n\n

If your IoT device has a hardware keystore built into the motherboard, like some TPMs, or the equivalent of the Android hardware-backed keystore or Apple Secure Enclave, then you can use that.

\n\n

With traditional servers you can use HSMs or Smart Cards, but the only full software solution that I'm aware of is to derive an AES key from some sort of \"hardware fingerprint\" built by combining serial numbers of all the hardware devices. Then use that AES key to encrypt the credentials. A process running on the same server can reconstruct the AES key and decrypt the credentials, but once you extract the file from the server, it's essentially un-decryptable.

\n\n
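A minimal Python sketch of that "hardware fingerprint" key derivation, using only the standard library. On a real server you would combine several serial numbers (CPU, NICs, disks); the single MAC address used here is just a stand-in, and the actual file encryption would use an AES library such as the third-party `cryptography` package:

```python
import hashlib
import json
import uuid

def hardware_fingerprint() -> bytes:
    """Gather machine-specific identifiers. A real implementation would read
    CPU, disk and NIC serial numbers; uuid.getnode() (the MAC) is a stand-in."""
    parts = {"mac": uuid.getnode()}
    return json.dumps(parts, sort_keys=True).encode()

def derive_key(fingerprint: bytes) -> bytes:
    """Stretch the fingerprint into a 32-byte AES-256 key with PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", fingerprint, b"credential-store", 200_000)

key = derive_key(hardware_fingerprint())
# Encrypting/decrypting the credential file would then use `key`, e.g. with
# AESGCM(key) from the `cryptography` package (not shown here).
```

Move the file to another machine and the reconstructed fingerprint, and hence the key, no longer matches, which is the whole point of the scheme.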

IoT throws a wrench into that for two reasons:

\n\n
    \n
  1. The assumption that hardware serial numbers are unique probably does not hold, and

  2. \n
  3. Unlike servers, attackers have physical access to the device, and can therefore probably get a shell on it to run the decryption program.

  4. \n
\n\n

Both hardware encryption (TPMs) and \"hardware fingerprint\" encryption are obfuscation at best because, fundamentally, if a local process can decrypt the data, then an attacker able to run that local process can also decrypt it.

\n\n
\n\n

So the standard trick looks like it doesn't work here. The first question you need to ask yourself is:

\n\n\n\n

Ultimately, I think you either need to decide that security > convenience and have a human enter the credentials after each boot-up (using something like @BenceKaulics's answer), or you decide that security < convenience and just put the credentials on the device, maybe using some obfuscation if you feel that makes a difference.

\n\n
\n\n

This is a hard problem made harder by the nature of IoT devices.

\n\n

For completeness, the full-blown industrial solution to this problem is:

\n\n\n\n

This way, an attacker who compromises a device can get a session opened, but never has direct access to the credentials.

\n" }, { "Id": "2068", "CreationDate": "2017-09-13T21:07:20.433", "Body": "

Reading about Zigbee, I see it described as a technology for creating personal area networks. I've also been reading about 6LoWPAN, which seems to crop up in mesh settings, cementing the idea, at least in my mind, that mesh is limited to PAN applications.

\n\n

Is there something about mesh networking that makes it inherently limiting in terms of network size?

\n\n

Since I already have some home automation gadgets using Zigbee, I already know that a Zigbee network is good at least for an apartment sized wireless network with ten to fifteen nodes.

\n\n

If I extended my Zigbee network to, say, provide smart lighting for an apartment block of 100 apartments, would I start to feel some limitations?

\n", "Title": "Are wireless mesh networks limited to PAN applications?", "Tags": "|zigbee|mesh-networks|6lowpan|", "Answer": "

If you consider that the internet is a mesh network of sorts, you should see your answer in the broadest terms.

\n\n

Asking if a specific mesh network has any scalability issues is slightly different. There is plenty of scope to architect a modified network protocol for a specific implementation, with the right sort of optimisations.

\n\n

At some point, you might also want to consider if the mesh approach is best, or maybe a hybrid approach has some value.

\n\n

Factors to consider are:

\n\n\n\n

It seems like the example you describe should scale to an apartment block; in any one second, the network shouldn't see more than a handful of data transactions. It might scale to a few blocks, but not a street or city. Proving an architecture at the scale of millions of devices or transactions is hard.

\n" }, { "Id": "2069", "CreationDate": "2017-09-13T22:48:25.923", "Body": "

I've been thinking about what it would take to build a temperature sensor network for the block of apartments I live in. A wireless mesh, if it would work at all, would have some nice features. In particular, I could place sensors in the garage and cellar storage areas where no mobile or Wi-Fi signals would otherwise reach.

\n\n

A possible reason not to use mesh is that I would probably also want to use sleepy end devices to avoid changing batteries often. From what I can see, the only way this is going to work is with clock synchronisation, so that the devices wake up at the same time and stay awake long enough for the signal to propagate through the network.

\n\n

While I've heard such a solution described, I wonder how well that would work out in practice. Presumably periodic clock synchronisation needs to be added to the protocol to avoid drift. Does anyone have experience with this, and are there other strategies for using mesh and sleepy devices together apart from clock sync?

\n", "Title": "Is wireless mesh a poor choice for sleepy devices?", "Tags": "|zigbee|power-consumption|mesh-networks|", "Answer": "

It is actually possible to allow for sleeping nodes without the need for time synchronization. The basic idea is to send a message multiple times until the node finally wakes up. There is of course a lot of room for clever optimization, so there are hundreds of MAC layer approaches based on this idea.

\n\n
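The "send until the node wakes up" idea (used by preamble-sampling MACs such as B-MAC) can be illustrated with a toy Python model; all the timing parameters are made up:

```python
def frames_until_delivery(frame_ms: float, sleep_ms: float,
                          listen_ms: float, first_wake_ms: float) -> int:
    """Count back-to-back frames a sender must transmit before one lands in
    the receiver's periodic listen window (toy model: frames are instant)."""
    t, n = 0.0, 1
    while True:
        awake = t >= first_wake_ms and (t - first_wake_ms) % sleep_ms < listen_ms
        if awake:
            return n
        t += frame_ms
        n += 1

# e.g. a node waking for 1 ms every 100 ms costs up to ~100 retransmissions
# of a 1 ms frame: the classic energy/latency trade-off of this approach.
```

The longer the receiver sleeps, the more energy the *sender* burns retransmitting, which is exactly why the synchronized (TDMA) approaches below are attractive.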

But since your question specifically asks about MAC layers where a node knows in advance when to transmit, i.e. Time Division Multiple Access (TDMA), I will focus on those approaches.

\n\n

As you already mentioned, one problem is clock drift, so the devices have to wake up regularly for time synchronization. In the typical short-range wireless applications we are talking about, signal propagation duration itself over a single hop is not a big problem. So it is sufficient that a central coordinator sends a beacon, including the current time, in regular time intervals that are known to the nodes.

\n\n

In a multi-hop network it gets more complicated. Just forwarding the beacon will not work, because the latency is too high. The solution is that multiple (if not all) nodes send beacons: each receives a beacon from a node closer to the coordinator, corrects its own clock drift with it, and sends out its own beacon with the corrected time. You just have to avoid building loops (been there, done that...).

\n\n
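The drift-correction step can be sketched in Python: the node's oscillator runs slightly fast or slow, and each beacon carrying the network time jump-corrects a local offset (the numbers are illustrative):

```python
class NodeClock:
    """A free-running local clock with a fixed drift, corrected by beacons."""

    def __init__(self, drift_ppm: float):
        self.rate = 1.0 + drift_ppm * 1e-6  # oscillator speed vs. real time
        self.offset = 0.0                   # correction accumulated from beacons

    def local_time(self, true_time: float) -> float:
        return true_time * self.rate + self.offset

    def on_beacon(self, true_time: float, beacon_time: float) -> None:
        # The beacon carries the coordinator's notion of time: jump to it.
        self.offset += beacon_time - self.local_time(true_time)

# A 40 ppm crystal drifts about 4 ms per 100 s, so beacons must arrive often
# enough that the accumulated error stays well below the slot guard time.
```

Between beacons the error grows linearly again, which is what fixes the beacon interval (and hence part of the energy budget) in a real deployment.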

Since now every node in the network has the same notion of time, there is a second problem: How does a node know when it should wake up to transmit or receive? There are basically four approaches, which can also be combined:

\n\n\n\n

There are two related techniques included in the IEEE 802.15.4 standard that currently receive much research attention: TSCH and DSME.

\n\n

TSCH itself is quite basic. It only solves the time synchronization problem, but leaves the slot assignment problem for an upper layer. There is 6TiSCH that tries to fill this gap, but it is still work-in-progress. There are implementations, for example included in Contiki or OpenWSN.

\n\n

DSME, on the other hand, already provides a mechanism for decentralized slot negotiation. We have actually built an open-source implementation of this called openDSME. While there is a video tutorial for running a simulation, the hardware implementation is unfortunately still under-documented. Ask another question or contact us directly if you want to use it.

\n" }, { "Id": "2071", "CreationDate": "2017-09-13T23:31:23.877", "Body": "

I'm working on connecting my Raspberry Pi 3B board to an X10 EagleEye motion sensor using the X-10 CM11A ActiveHome serial interface.

\n\n

Can anybody share a library (C/C++, Java, Python) or open-source tool used to control X10 power-line sensors with the X-10 CM11A and a Raspberry Pi?

\n", "Title": "Sensor using X10 power line protocol connection to Raspberry Pi", "Tags": "|raspberry-pi|mesh-networks|", "Answer": "

I had an Ubuntu system with X10, CM11A, and heyu for several years and switched it all over to a Raspberry Pi about 4 years ago.\nI then replicated the setup twice, for my parents and a sibling.

\n\n

It can be a pain to get it all working, but once it does, it's great.

\n\n

I have Python scripts that follow the heyu log files to generate alarms.\nOther scripts use heyu for control.
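The log-following part can be a small tail -f style generator in Python; the heyu log path and line pattern in the trailing comment are assumptions:

```python
import time

def follow(path: str, poll: float = 0.5, max_polls: int = None):
    """Yield complete lines appended to `path` after this call (like tail -f).
    `max_polls` bounds the idle waits so the generator can terminate."""
    f = open(path)
    f.seek(0, 2)  # start at the current end of the log

    def lines():
        buf, polls = "", 0
        while max_polls is None or polls < max_polls:
            chunk = f.readline()
            if chunk:
                buf += chunk
                if buf.endswith("\n"):     # only yield complete lines
                    yield buf.rstrip("\n")
                    buf = ""
            else:
                time.sleep(poll)
                polls += 1

    return lines()

# An alarm script would then watch for interesting events, e.g.
# (hypothetical path and pattern):
#   for line in follow("/var/log/heyu/heyu.log"):
#       if "rcvi addr unit" in line:
#           raise_alarm(line)
```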

\n" }, { "Id": "2075", "CreationDate": "2017-09-14T15:56:18.137", "Body": "

I recently built an XBee network (using firmware version 9000 of the DigiMesh 2.4 TH function set, XB24C family). Currently I use the DTR pin to put the XBee to sleep, and this configuration works perfectly. But I read that this is not a good solution, as the mesh must be rebuilt whenever a node leaves or joins the network.

\n\n

DigiMesh 2.4 allows a sleep mode with SM=7 or SM=8, where the nodes sleep synchronously. After updating the firmware with XCTU, it does not show the SM 7 and 8 options.

\n\n

Is there any other thing I should set, or is there something else I am missing?

\n", "Title": "ZigBee sleep problem", "Tags": "|zigbee|mesh-networks|", "Answer": "

You say you built a Zigbee network using the Digimesh protocol. The thing is that this is an impossibility: Zigbee and Digimesh are competing solutions for mesh networking, not something you use together.

\n\n

See this link: Digimesh manufacturer explains the difference between Zigbee and Digimesh

\n\n

Digimesh uses Sleep Modes 7 and 8 whereas Zigbee uses 4 and 5. Have you accidentally changed to use Zigbee instead of Digimesh? That would be the reason why you don't see SM 7 and 8 anymore.

\n\n

Edit:

\n\n

Looking at the docs for the XB24C, they have now also adopted SM numbers 4 and 5.

\n\n

Look:\n https://www.digi.com/resources/documentation/digidocs/pdfs/90001506.pdf

\n" }, { "Id": "2088", "CreationDate": "2017-09-17T02:31:47.050", "Body": "

I am making a home automation project based on a star topology. What I am trying to achieve is that one NodeMCU/ESP8266 acts as a server accessible from the outside world, and the other NodeMCU/ESP8266 boards act as clients connected to relays or sensors.

\n\n

Upon receiving a command from the server, the relays must be triggered accordingly and the status updated back to the server. I have read lots of tutorials covering different methods. MQTT seems good, but I don't want to use any third-party broker like Adafruit; I want to host the broker either on my NodeMCU or on my web host. The sad part is I don't own a Raspberry Pi.

\n\n

Can I use one of my ESP8266 devices as an MQTT broker, or is there a suitable alternative?

\n", "Title": "Can I use an ESP8266 as an MQTT broker?", "Tags": "|smart-home|mqtt|esp8266|", "Answer": "

Check this out: https://github.com/martin-ger/uMQTTBroker

\n

It probably won't be as powerful as a Raspberry Pi, but it gets the job done.

\n" }, { "Id": "2092", "CreationDate": "2017-09-17T15:17:08.383", "Body": "

I gave Node-Red a short test-run this week. It is not clear to me whether it supports flows that encompass more than one request. Does Node-Red have a request-per-flow or a session-per-flow model?

\n\n

Having worked with data-flow based programming tools for Business Process Modeling (webMethods and Tibco), I see one of their key features is the ability to model sessions and workflows. These tools are, however, rather large for the purposes of most IoT projects so it would be great if something similar could be accomplished with Node-Red.

\n\n

A follow-up question, in case Node-Red does not support this, is whether there are some simple tools that do support graphical modeling of session flows?

\n", "Title": "Does Node-Red support multi-request flows (i.e. sessions)?", "Tags": "|software|system-architecture|visualization|node-red|", "Answer": "

The answer is no and yes.

\n\n

Flows in Node-RED are pretty static; there is no notion of instantiating a flow when the first request comes in, such that you might have an instance of a flow per request.

\n\n

There is also no built-in notion of a session that would allow you to associate messages flowing through flows with a session.

\n\n

However, you can relatively easily construct these things yourself. Node-red provides a notion of flow and global state, which is accessed using the flow and global objects, see https://nodered.org/docs/writing-functions#storing-data. What you would do is send a cookie to clients and then explicitly associate an incoming request with saved global or flow state. You can then write code that is \"session aware\" based on the saved session state. This works well in function nodes, but you will hit some issues with the built-in nodes that do things like rate limit or split & merge messages because these are not generally aware of the session notion.

\n\n

In the pizza example you would maintain the state of an order in the flow or global context and you would access the appropriate order's state based on the cookie value.

\n" }, { "Id": "2095", "CreationDate": "2017-09-17T17:06:58.083", "Body": "

What are the technical differences between Z-Wave and its Plus alternative? Is Plus just a stronger signal?

\n", "Title": "What's the difference between Z-Wave and Z-Wave Plus?", "Tags": "|zwave|", "Answer": "

The Z-Wave Alliance have a page explaining the differences pretty nicely:

\n
\n

Z-Wave Plus™ is a new certification program designed to help consumers identify products that take advantage of the recently introduced 'Next Gen' Z-Wave hardware platform, also known as 500 Series or 5th Generation Z-Wave. [...]

\n

With the introduction of the Next-Gen, Z-Wave 500 series hardware platforms, Z-Wave saw its ecosystem bolstered with new capabilities, including increased range, extended battery life, Over The Air upgrading (OTA), additional RF channels and more, all of which are fully backwards compatible with existing Z-Wave products. [...]

\n

Features

\n\n
\n

Some miscellaneous differences are noted in a TechHive report:

\n
\n

The new chips also boast dramatically smaller packages. Sigma's SD3502, for instance, is a general-purpose Z-Wave system-on-a-chip that integrates a microcontroller, RF transceiver, 128-bit AES security engine, and memory in a package that measures just 7mm square.

\n
\n

From the perspective of a consumer, Z-Wave Plus' main benefits are battery life and range. Many hubs (e.g. SmartThings) already support Z-Wave Plus, and many sensors and other nodes support it too. The Z-Wave Alliance claim that Z-Wave Plus devices generally won't cost a lot more, either, so in that case, it's probably beneficial to pick Z-Wave Plus if possible.

\n

As a developer, the OTA updates and smaller chip are likely to be useful. I can't say much about the technical details, because the specifications are secured and require the signature of a non-disclosure agreement. If anyone has any additional information, I'd be interested to hear.

\n" }, { "Id": "2097", "CreationDate": "2017-09-17T20:09:01.083", "Body": "

Over on Stack Overflow there is a question about implementing request/response interaction over MQTT. As one answer notes, you can do it by publishing the request on one topic and listening for the response on another topic that was included in the request. It's a little awkward, but it works.

\n\n

As MQTT is used extensively throughout IoT, I wonder, have there been any attempts to standardize this RPC type interaction for the sake of interoperability?

\n", "Title": "Is there a standardized RPC mechanism for MQTT?", "Tags": "|mqtt|standards|", "Answer": "

Azure IoT Hub has a concept of direct methods:

\n\n
\n

IoT Hub gives you ability to invoke direct methods on devices from the cloud.

\n
\n\n

This is implemented over MQTT (AMQP is not supported), where

\n\n
\n

Devices receive direct method requests on the MQTT topic:

\n
\n\n
$iothub/methods/POST/{method name}/?$rid={request id}.\n
\n\n

They've wrapped this in their SDK, so developers don't need to worry about specifically monitoring the topic. You could implement a similar approach.

\n\n
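As a sketch of what monitoring that topic involves in Python (the response-topic shape, `$iothub/methods/res/{status}/?$rid={request id}`, is taken from the same Azure documentation; treat this as illustrative rather than the official SDK's behaviour):

```python
from urllib.parse import parse_qs

REQUEST_PREFIX = "$iothub/methods/POST/"

def parse_method_request(topic: str):
    """Split a direct-method request topic into (method name, request id)."""
    if not topic.startswith(REQUEST_PREFIX):
        raise ValueError("not a direct-method request topic")
    method, _, query = topic[len(REQUEST_PREFIX):].partition("/?")
    return method, parse_qs(query)["$rid"][0]

def response_topic(status: int, request_id: str) -> str:
    """Topic the device publishes the method result on."""
    return f"$iothub/methods/res/{status}/?$rid={request_id}"

# A device would subscribe to "$iothub/methods/POST/#", call
# parse_method_request() on each message, run the method, and publish
# the result to response_topic(200, rid).
```

The request id is the correlation token that turns two one-way publishes into an RPC, which is the same pattern described in the other answers.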

As per other answers, pub-sub does not lend itself to RPC, and there are not any standards, as far as I'm aware.

\n\n

More documentation is here.

\n" }, { "Id": "2102", "CreationDate": "2017-09-19T12:54:51.567", "Body": "

I would like to build a gateway device using the ESP32. It should connect to the Modbus TCP port of a sensor. For this purpose, I would like to use the existing Modbus Rust implementation, but there is very little information on how I could get Rust code running on the ESP32.

\n\n

Can anyone shed some light on this topic?

\n", "Title": "Working with Rust on the ESP32", "Tags": "|esp32|", "Answer": "

Espressif have just released an official LLVM backend and Clang front end for the Xtensa architecture used by the ESP32.\nSee their announcement here: https://esp32.com/viewtopic.php?p=38466\nRepos: https://github.com/espressif/llvm-xtensa & https://github.com/espressif/clang-xtensa\nAs Rust is based on LLVM, this new backend should help make Rust support for the ESP32 more likely. The announcement even hints at this future Rust support.

\n" }, { "Id": "2106", "CreationDate": "2017-09-19T18:13:07.960", "Body": "

Armis Labs recently revealed a new attack vector that affects essentially all major operating systems, including those used on IoT devices, via Bluetooth. BlueBorne is reported to spread malware laterally to adjacent devices, which sounds pretty much like an IoT nightmare to me.

\n\n

According to Armis Labs' website the Linux based Tizen OS, a consumer-oriented platform for things like smart refrigerators, is affected. Things like my1 Samsung RB38K7998S4/EF are supposedly vulnerable.

\n\n

Given that any official patch to fix the bug may take Samsung some time, how can one secure the refrigerator meanwhile against BlueBorne?

\n\n

Is it possible to completely disable Bluetooth as a mere user? I.e. can one blacklist the core Bluetooth modules, disable and stop the Bluetooth service, and remove the Bluetooth modules as outlined in this general Linux question (How do I secure Linux systems against the BlueBorne remote attack?)?

\n\n
\n\n

1: Just kidding of course, I would not buy a fridge worth 3+ k€... but the question still stands.

\n", "Title": "How to secure a Samsung Smart Refrigerator against BlueBorne?", "Tags": "|security|bluetooth|", "Answer": "

That is one of the many problems with IoT devices: The operating systems are proprietary, and you do not have root access to them. Furthermore, disabling kernel modules is generally too complex for most users.

\n\n

Additionally, there's a large number of models, and updates typically stop before the end of life of the product, leading to unpatched code in the wild. As they are internet-connected, they may be attacked remotely, and may even participate in new attacks, as they are in essence full computers with a network stack.

\n\n

Bruce Schneier has written a good essay about IoT security where he highlights many of these problems.

\n\n

So TL;DR: As a consumer, there's nothing you can do.

\n" }, { "Id": "2107", "CreationDate": "2017-09-20T04:12:22.590", "Body": "

I have a telephone at home which is an electronic instrument connected via wire.

\n\n

It is not a cordless but a corded phone that comes with a telephone wire.

\n\n

Is there a way to lock or unlock it via mobile or IoT? A telephone code can be used to lock the phone, but what if I forget to do it?

\n", "Title": "How to control my telephone via Mobile/IoT?", "Tags": "|smart-home|", "Answer": "

Yes, you could add a relay that disconnects the telephone wires and control it with an IoT device, e.g. an Arduino with a GSM module like the SIM800/900 to control the relay.

\n" }, { "Id": "2116", "CreationDate": "2017-09-21T14:35:06.320", "Body": "

The Background

\n\n

I'm prototyping some basic home automation software using Windows IoT Core and Azure.

\n\n

I have built a Windows Universal Application that sends data to a Web App hosted in Azure via Web API. (token based auth)

\n\n

The Problem

\n\n

I don't want people sniffing my network and trying to breach either the Pi or the WebApp/database!

\n\n

- The data sent via API is very sensitive and should be untraceable.

\n\n

Is it possible? If so, how?

\n\n

I've done some research into ways to secure a Pi by creating a 'Tor Hidden Service'.

\n\n

This video explains how to set up a Hidden Service when running a LINUX based OS.

\n\n

THE QUESTION

\n\n

How would you go about creating a Tor Hidden Service in Windows IoT Core, and route the Universal Application traffic through Tor to Azure?

\n", "Title": "Creating a Tor 'hidden service' on Windows IoT Core", "Tags": "|security|raspberry-pi|microsoft-windows-iot|azure|", "Answer": "

Tor seems overkill for what you're trying to do. If you want to prevent sniffing of data on the network, simply communicate with the server using SSL by installing a certificate on the server. You may also want to pay more attention to firewall rules to block incoming traffic using

\n\n
\n

netsh advfirewall firewall

\n
\n\n

The Pi is not very secure if anyone can get physical access, as the SD card can be removed or replaced, so Tor won't help you much there. With physical access, it is possible for someone to get hold of the token that you use for the service; it may be difficult, but they could find it on the SD card.

\n" }, { "Id": "2123", "CreationDate": "2017-09-22T15:26:28.500", "Body": "

I've been doing a fair bit of DIY home automation (RF, 433 MHz) lately across many different devices, which worked well for all except one: a pool robot with a really crappy remote control.

\n

I collected some data using a BladeRF SDR and GNU Radio. The "other 3" column is basically the action, while "other 1" seems to be some serial number and "other 2" identifies the robot in case you have multiple in use, I guess (a friend of mine who has the same robot has a different value there). I'm not sure what purpose the count serves, but I would guess it lets the robot notice when the range gets too wide and transmissions go missing. I've narrowed down the bytes and their meanings; however, I fail at calculating the correct CRC (checksum) for the data.

\n

OLD - PLEASE SEE UPDATE BELOW!!

\n

Here is some sample data:

\n
<other1                  > <other2> <other3> <count > <crc   >      \n10110100 00111110 10001111 11001000 00000001 11110111 01011110\n10110100 00111110 10001111 11001000 00000001 11111000 01010011\n10110100 00111110 10001111 11001000 00000001 11111001 01010100\n10110100 00111110 10001111 11001000 00000001 11111010 01010001\n10110100 00111110 10001111 11001000 00000001 11111011 01010010\n10110100 00111110 10001111 11001000 00000001 11111100 01010111\n10110100 00111110 10001111 11001000 00000001 11111101 01011000\n10110100 00111110 10001111 11001000 00000001 11111110 01010101\n10110100 00111110 10001111 11001000 00000001 11111111 01010110\n10110100 00111110 10001111 11001000 00000001 00000000 01100111\n10110100 00111110 10001111 11001000 00000001 00000001 01101000\n10110100 00111110 10001111 11001000 00000001 00000010 01100101\n10110100 00111110 10001111 11001000 00000001 00000011 01100110\n10110100 00111110 10001111 11001000 00000001 00000101 01100100\n10110100 00111110 10001111 11001000 00000001 00000111 01100010\nadded data:\n10110100 00111110 10001111 11001000 00000010 00000110 01100100\n10110100 00111110 10001111 11101010 00000010 01100101 10011010\n10110100 00111110 10001111 11101010 00000001 01100100 10011100\n10110100 00111110 10001111 11101010 00000001 01100011 10011101\n10110100 00111110 10001111 11101010 00000001 01100110 10011010\n
\n

There is a count that must change for each request, and there are some commands to send; e.g. the "other 3" column could read 00000010 instead of 00000001.

\n

It would be very helpful if somebody could give me some hints on where to look. I've tried different techniques like XORing across the bytes or calculating a modulo, etc. I even tried different CRC brute-force tools, unfortunately without success so far.

\n

EDIT: I've put the data into Excel and added a function (it basically compares each 4 bits with the ones from the previous transmission). I did that because I noticed the CRC once stayed the same; this was the case when both the action and the count were increased by 1. Please have a look:

\n

\"data\"

\n

UPDATE:

\n

After hours of searching I found a more detailed spec from the same vendor on the net, and it turns out that what I thought was a CRC is in fact a parity. I also fine-tuned my GNU Radio capture flowgraph and collected some new data. Please disregard the data above and have a look here:

\n
other 1> other 2                > other 3> other 4    > parity\n10110100 001111101000111111101010 00000001 011110101001 0101\n10110100 001111101000111111101010 00000001 011110111001 0110\n10110100 001111101000111111101010 00000001 011111001001 0011\n10110100 001111101000111111101010 00000001 011111011001 0100\n10110100 001111101000111111101010 00000010 011111101001 0100\n10110100 001111101000111111101010 00000010 011111111001 0011\n10110100 001111101000111111101010 00000010 100000001001 0011\n10110100 001111101000111111101010 00000010 100000011001 0100\n10110100 001111101000111111101010 00000001 100000101001 0100\n10110100 001111101000111111101010 00000001 100000111001 0011\n10110100 001111101000111111101010 00000001 100001001001 0110\n10110100 001111101000111111101010 00000001 100001011001 0101\n10110100 001111101000111111101010 00000010 100001101001 0101\n10110100 001111101000111111101010 00000010 100001111001 0110\n10110100 001111101000111111101010 00000010 100010001001 1011\n10110100 001111101000111111101010 00000010 100010011001 1100\n10110100 001111101000111111101010 00000001 100010101001 1100\n10110100 001111101000111111101010 00000001 100010111001 1011\n10110100 001111101000111111101010 00000001 100011001001 1110\n10110100 001111101000111111101010 00000001 100011011001 1101\n
\n

And here is it again as fancy excel:

\n

\"enter

\n

Does anybody know how to calculate that parity? I've tried splitting the data up and using the usual parity calculations, but unfortunately without success so far.

\n", "Title": "Parity calculation problem", "Tags": "|smart-home|data-transfer|interfacing|", "Answer": "

Oh man, don't ask me how but I think I figured it out.

\n\n

Let's have a look:

\n\n

\"enter

\n\n

Basically, you split the data up into packets of 4 bits each. You then concatenate the first, second, third and fourth bits of each nibble separately; this can be seen in columns 1, 2, 3 and 4. Afterwards you count the 1s in each of them (the number of ones is written beside each column). If the count is even, the parity bit is 0; if it is odd, it's a 1. Finally, before you are finished, you binary-add 1 to that result (!). That matched every single time, and I was able to successfully generate my own frames this way. Problem solved, it seems. Perfect. Thanks a lot everybody for contributing.
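In Python, the recipe from the paragraph above looks like this. The wrap-around behaviour when the +1 overflows four bits is an assumption, since none of the captured frames exercise it:

```python
def robot_parity(frame_bits: str) -> str:
    """Compute the 4-bit check for a captured frame (spaces are ignored)."""
    bits = frame_bits.replace(" ", "")
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    value = 0
    for pos in range(4):                     # column-wise parity per bit position
        ones = sum(n[pos] == "1" for n in nibbles)
        value = (value << 1) | (ones & 1)    # even count -> 0, odd count -> 1
    return format((value + 1) & 0b1111, "04b")  # then binary-add 1
```

A frame to transmit is then just the 52 data bits followed by `robot_parity(frame)`.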

\n" }, { "Id": "2133", "CreationDate": "2017-09-25T12:29:04.027", "Body": "

There are two common platforms which seem very common for IoT projects: Arduino Uno and Raspberry Pi. How would I decide which one would be most suitable for a specific project?

\n\n

I haven't got a specific problem in mind, rather I'm trying to understand what the difference between these types of product are, and how I should go about starting to chose the best hardware for building a project if I want to do it 'right'.

\n\n

To clarify on the 'different models of Raspberry Pi': this question is more about the OS that the platform runs, bare metal/RTOS versus a conventional Linux distribution. I accept that within both the SBC and MCU categories there is a large spread of performance and peripherals to choose from, so I'm not focusing on any one precise device.

\n", "Title": "Arduino Uno Vs Raspberry Pi", "Tags": "|hardware|", "Answer": "

Personally I use an Arduino for prototyping an IoT idea. There is far less overhead in getting a basic concept up and running, plus the licensed third-party boards are extremely cheap. Once the idea has been proven I then migrate it over to a Pi, which in itself is a challenging activity. There are probably people out there who do it the other way round, but from a hardware perspective Arduino always wins for me, as it has built-in analogue-to-digital capability (something sadly lacking from the Pi).

\n" }, { "Id": "2147", "CreationDate": "2017-09-29T07:58:15.020", "Body": "

I would like to set up the assistant so that I can have a conversation with it.

\n\n

Right now, it gives:

\n\n\n\n

I would like to find a way so that I do not need to say the trigger word every time, which would then give:

\n\n\n\n

Is that possible? If yes, can I set the delay during which the Google Home remains listening?

\n", "Title": "How to ask Multiple questions to Google Home", "Tags": "|google-home|google-assistant|", "Answer": "

This is now possible with the Continued Conversation feature, announced at Google I/O 2018 and now deployed.

\n\n

If Continued Conversation is enabled on your Google Home, the microphone will continue to listen for 8 seconds after the reply (or until you say \"thank you\", if you wish to stop it listening early). A demonstration of the new flow is available here, at AndroidPolice, where it's shown that the following conversation could work:

\n\n
\n

Hey Google, did the Warriors win?

\n \n

Yes, the Warriors won 118 to 92 last Sunday against the Pelicans.

\n \n

Nice, when's their next game?

\n \n

The Warriors' next game is today at 7:30pm, where they will be playing the Pelicans.

\n
\n" }, { "Id": "2150", "CreationDate": "2017-09-29T19:28:57.030", "Body": "

I have just started investigating smart lights.

\n\n

I am interested in using some Wi-Fi sockets in some lamps in my office to automate the lights.

\n\n

I am curious if there is a way to turn my lights on from my PC, and ultimately turn them on when my Linux machine wakes from suspension. Then turn them off when the Linux machine suspends.

\n\n

Are there Wi-Fi sockets that use a certain messaging protocol that is open, for which I could write an app to use with them?

\n", "Title": "Is there a way to control my lights from my Linux desktop?", "Tags": "|smart-home|wifi|linux|lighting|", "Answer": "

If you are using Wiz connected lights, then this Gnome Shell extension (Control Wiz Connected Lights Through Gnome Shell) can help you out.

\n" }, { "Id": "2160", "CreationDate": "2017-10-02T16:30:26.580", "Body": "

Is it possible to install and run Windows 10 IoT on a \"regular\" workstation/device/PC? Specifically, I'm talking about an old laptop which I want to use as an IoT device just running a website (or maybe an app) as a display unit for my smart home system, but I don't need all the overhead of Windows itself.

\n", "Title": "Windows 10 IoT on \"regular\" PC?", "Tags": "|microsoft-windows-iot|", "Answer": "

Yes, Windows 10 IoT runs on any x86 or x64 processor faster than 400 MHz with 256 MB of RAM. The SoC compatibility list is for the unusual and non-x86 processors that it supports.

\n\n

Windows 10 IoT Core only supports running a single UWP app that launches at startup, which could work for your web-server use case. Windows 10 IoT Enterprise supports a more typical Windows desktop experience.

\n\n

Windows 10 Home is less resource-intensive than Windows 7 and usually performs about as well as Windows XP, which was released in 2001, so if your machine is less than 15 years old then plain old Windows 10 should run on it.

\n\n

Windows IoT Hardware Requirements

\n" }, { "Id": "2162", "CreationDate": "2017-10-02T18:49:15.377", "Body": "

I want to build an audio system for my apartment, that would have speakers in multiple rooms, and that would activate corresponding speakers automatically upon entering a room. It would have to be connected to Spotify, AirPlay and support mini jack input.

\n\n

Is this possible? Is there some system that has this implemented already (I saw Sonos has multi-room support, but not the automation that I think of). I can do some coding if needed.

\n\n

The ideal scenario would be a system made of some motion/infrared sensors that would remember which ones activated in what order, so it can keep track of whether there is somebody in the room or not. I think I'll choose HomeKit for my system, so I expect that it will be able to start playing a Spotify playlist automatically on some HomeKit trigger. An example of the above system: I listen to a podcast in the living room, but when I go to the bathroom it starts playing there as well, as long as I'm there.

\n\n

A project expanding this one I have in mind, is automatically playing a specific Spotify playlist when I get home.

\n", "Title": "Creating smart multi-room audio system", "Tags": "|smart-home|audio|sound|", "Answer": "

Requirements: Multiple rooms, Spotify, AirPlay, mini-jack.

\n\n

Functionalities: Activating speakers on presence/motion. Playing Spotify when you get home.

\n\n

This is actually a very broad question that can have a very broad set of answers. However, let me explain how I would do this.

\n\n
\n\n

Audio

\n\n

For up to about 4 rooms, you could use an amplifier that has different \"zones\". Be sure to get an amplifier that has an open API.

\n\n

You can wire the speakers up so that each \"zone\" is connected to one room, and you can simply control the volume for each room as you walk in/out.

\n\n

If you have multiple amplifiers, you can use the output of one amplifier as an input to the other amplifier. Then you'll have to switch to the correct inputs when changing rooms.

\n\n

Another option could be using a setup like \"MusicCast\", where one amplifier streams its music to the others. But I haven't yet found a fancy way to control MusicCast.

\n\n
\n\n

Motion detection

\n\n

There really are hundreds of devices for this. You could get an ESP8266 (Wi-Fi module) and hook up a PIR module. If you program the ESP8266 to send a volume \"80%\" command to the amplifier whenever it registers motion (and send mute when it hasn't registered any for quite some time), you have a very cost-effective setup.

\n\n

However, you should look into having a central (home automation) server to parse all inputs and control the amplifiers, because the behaviour will be easier to change through the server's interface than when you're using the dedicated modules themselves.

\n\n

I would suggest using Node-RED with the Yamaha or Denon nodes and MQTT. Your motion sensors would publish MQTT messages for their motion events; Node-RED would parse them and apply logic to control the amplifier. This way you can \"reprogram\" your logic without having to reflash the ESP8266s. You could also more easily integrate an app (there are a lot of MQTT button apps).
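As a sketch of the kind of logic that central server would run per zone (the payload strings, the 80% level and the timeout are my own illustrative assumptions, not any real amplifier API):

```python
QUIET_TIMEOUT = 300  # seconds without motion before muting a zone (assumed)

def zone_command(last_motion: float, now: float) -> str:
    """Payload the automation server would publish for one zone:
    raise the volume while the room is occupied, mute after a quiet spell."""
    if now - last_motion < QUIET_TIMEOUT:
        return "volume:80"
    return "mute"

print(zone_command(last_motion=100.0, now=150.0))  # → volume:80
print(zone_command(last_motion=100.0, now=500.0))  # → mute
```

Keeping this rule on the server rather than in the ESP8266 firmware is exactly what makes it easy to tweak later.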

\n\n
\n\n

I must note that the company I work for is affiliated with Yamaha, and thus my answer may be a bit biased. But you should always opt for an amplifier that has an open API (Denon amplifiers have one as well) if you want to control the amplifier over IP.

\n" }, { "Id": "2170", "CreationDate": "2017-10-04T10:52:10.447", "Body": "

Right now I'm working on architecture for application that will manage smart sockets and I need advice. This is what I have for now:

\n\n

\"system

\n\n

I'm not sure how to handle the connection between Spring and the smart sockets (I want to be able to turn a socket on/off from my web app). I communicate with my sockets via HTTP. My idea is to have a server that will manage connections and commands between Spring and the sockets:

\n\n
    \n
  1. When a socket is turned on, it will try to connect to the server.
  2. \n
  3. The server will wait for new connections from smart sockets, to save them. It will also wait for commands from Spring to change the state of a socket.
  4. \n
\n\n

Is it a good idea? If yes, is there any tool that will help me do that (maybe built into Spring), or should I write it on my own?

\n", "Title": "Server to handle connections with smart sockets", "Tags": "|system-architecture|smart-plugs|", "Answer": "

You already have the capability to publish data over the MQTT protocol, even a broker and a way to forward certain requests from the socket to Spring.

\n\n

I see no point in making Spring responsible for knowing whether a socket is plugged in. The MQTT broker does that under the hood; you don't have to reinvent the wheel.

\n\n

So, I would create some more publishers and subscriptions between Spring and the sockets and use MQTT as the protocol there, not touching HTTP at all this time.
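To make the topic layout concrete, here is a toy in-memory illustration of how a broker decouples Spring from the sockets. In a real deployment this would be an MQTT broker such as Mosquitto with proper client libraries; the topic names here are made up for the sketch.

```python
from collections import defaultdict

class TinyBroker:
    """In-memory stand-in for an MQTT broker, for illustration only."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        # deliver the message to every subscriber of this topic
        for cb in self.subs[topic]:
            cb(topic, payload)

broker = TinyBroker()
events = []

# The Spring app subscribes to a socket's state topic...
broker.subscribe("sockets/42/state", lambda t, p: events.append((t, p)))
# ...and the socket announces itself when it powers on. Spring never
# has to track the connection itself; the broker routes the message.
broker.publish("sockets/42/state", "online")

print(events)  # → [('sockets/42/state', 'online')]
```

Commands would flow the other way, with each socket subscribed to something like `sockets/42/command`.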

\n" }, { "Id": "2180", "CreationDate": "2017-10-07T15:31:31.197", "Body": "

I wonder what the difference between the Amazon Echo and the Echo Plus is. Of course I know that the Echo Plus has a smart home hub integrated, but what exactly is it for? I cannot find any information about that on the internet.

\n\n

I am asking because I want to buy an Amazon Echo/Plus for some simple automation tasks which involve, for example, switching the lights on and off or playing music. It would also be great if I could regulate my heater and so on when I am not at home.

\n\n

Do I need the Plus version or is the normal version suitable for my purposes?

\n", "Title": "What is the difference between Echo and Echo Plus?", "Tags": "|smart-home|alexa|amazon-echo|", "Answer": "

As far as I can tell from the documentation, the Echo Plus will at least have a ZigBee radio and the ability to act as a hub for those devices, according to Simple setup devices compatible with Echo Plus:

\n
\n

Echo Plus has a built-in hub that seamlessly connects and controls ZigBee smart devices such as light bulbs and plugs without the need for separate hubs or apps. With simple setup, connecting Echo Plus to the devices below is easy. Just say "Alexa, discover my devices" and Echo Plus will discover and set up your devices.

\n

Like other Echo devices, Echo Plus can connect to hundreds of Wi-Fi and Bluetooth smart home devices with the Alexa app, such as lights, outlets, TVs, thermostats, cameras, and more. Shop all smart home products.

\n
\n

It sounds like the software will also be upgraded to match expectations of a smart hub, according to the info page:

\n
\n

Group multiple actions together at scheduled times or with a single voice command, like securing your home by locking the doors and turning off the lights when you go to bed.

\n
\n

The scheduling and grouping would seem to reduce the need for a hub if you're just setting up a basic home automation system.

\n
\n

For you, if you're hoping to control lights, it might be worth having an Echo Plus. Some of the lighting devices that support the new 'simple setup' might be interesting to you, particularly if you don't already have any hubs set up. However, many smart lights and switches are already compatible with the original Echo, so check with the products you're interested in to see if they'd benefit from an Echo Plus.

\n" }, { "Id": "2184", "CreationDate": "2017-10-08T07:26:04.990", "Body": "

I have only the 4-pin USB TTL cable, which has the Rx, Tx, Vcc and GND pins. The 6-pin one also includes the CTS and RTS pins, and is specified for use with the Intel Galileo board. My question is whether I can still use it for normal serial data communications, and what the difference would be.

\n\n

Cable specifics mentioned here:\nhttps://www.intel.com/content/www/us/en/support/boards-and-kits/intel-galileo-boards/000006343.html

\n", "Title": "Is it okay to use the 4 pin USB TTL cable instead of the recommended 6 pin USB TTL cable for Intel Galileo?", "Tags": "|microcontrollers|linux|microsoft-windows-iot|intel-galileo|", "Answer": "

You can; the Rx, Tx, Vcc and GND pins are enough for basic serial communication.

\n\n

But for high-speed, reliable communication, it's best to use the handshaking (CTS/RTS) lines.

\n" }, { "Id": "2199", "CreationDate": "2017-10-14T07:06:46.047", "Body": "

As I understand it, ZigBee is only a specification of a data transfer protocol. So I was expecting to find some library that implements this protocol, to use with my MCU and RF transceiver. Instead, I have only found specific ZigBee devices (e.g. XBee).

\n\n

What I'm asking is: Can I implement the ZigBee protocol using only an MCU and RF transceiver?

\n\n

If not, what hardware do I need to create a ZigBee node?\nAre there any ZigBee libraries that I can use with generic hardware?

\n", "Title": "Can I implement ZigBee with generic hardware?", "Tags": "|microcontrollers|protocols|zigbee|data-transfer|", "Answer": "

You could, but you'd need a radio transceiver with compatible frequency range, modulation, and data rate.

\n\n

Typically, radios with those capabilities are sold either as Zigbee radios or for the underlying 802.15.4 layer. Sometimes they can also handle additional custom modes of communication (though the available software stacks often force you to pick a mode at compile time).

\n\n

Truly \"generic\" radios are typically \"software-defined\", with early conversion of the RF or IF signal to digital, computational signal processing, and then a conversion back to IF or RF if there is a transmit path. While you could speak Zigbee with a suitable SDR, the hardware tends to be a bit expensive and power-hungry for typical embedded applications compared to a radio specifically designed for 802.15.4.

\n" }, { "Id": "2208", "CreationDate": "2017-10-15T21:07:49.543", "Body": "

Just like what the question asks, can multiple subscribers subscribe to the same topic reading the same message from AWS IoT?

\n", "Title": "Can multiple subscribers subscribe to the same topic from AWS IoT?", "Tags": "|mqtt|aws-iot|aws|publish-subscriber|", "Answer": "

Yes. AWS IoT uses MQTT, which follows a topic-based publish-subscribe pattern. This allows multiple subscribers to a topic, and multiple clients can even publish to the same topic (a topic is not specifically designated for one client to publish or subscribe to).

\n\n

To subscribe, a client must send a SUBSCRIBE packet:

\n\n
\n

The SUBSCRIBE Packet is sent from the Client to the Server to create one or more Subscriptions. Each Subscription registers a Client\u2019s interest in one or more Topics. The Server sends PUBLISH Packets to the Client in order to forward Application Messages that were published to Topics that match these Subscriptions. The SUBSCRIBE Packet also specifies (for each Subscription) the maximum QoS with which the Server can send Application Messages to the Client.

\n
\n" }, { "Id": "2222", "CreationDate": "2017-10-26T07:12:03.303", "Body": "

I have a GPS Tracker application which periodically sends Latitude, Longitude, Altitude to a LoRa Gateway.

\n\n

I am using The Things Network and sending the data via the OTAA method, which provides me with the Application EUI and Application Key that can be used to program my LoRa device to connect to the gateway.

\n\n

But is it possible to connect to a new LoRa gateway with the same GPS tracker, which could, for instance, be placed in some other location in the city?

\n", "Title": "Is it possible to share an Application between two or more LoRa Gateways using TTN?", "Tags": "|lora|lorawan|", "Answer": "

Indeed, the whole point of \"The Things Network\" (TTN) is that multiple LoRa gateways are used to transfer messages between LoRa radio signals and the Internet based routing.

\n\n

And these don't even have to be gateways owned by you - by registering your device on TTN you have access to all gateways in the public system, and by making your gateway part of TTN, it is available for all other users' nodes.

\n\n

Otherwise you'd just have a point-to-point LoRa link or a private LoRaWan, not \"The Things Network\".

\n" }, { "Id": "2239", "CreationDate": "2017-11-02T16:53:21.563", "Body": "

I'm about to start an investigation of people's movement after surgery (walking, basically). I would like to know where I should start looking for movement sensors (pedometer/accelerometer). This sensor basically has to be worn by the patient and has to monitor their movement.

\n\n

I need something affordable, efficient, lightweight and non-intrusive. I don't know where I should start searching for sensors.

\n\n

The sensor also needs to be accessible from outside, for example by a custom iOS app that asks for the sensor information and stores it in a database in the cloud.

\n\n

The sensor has to be accessed wirelessly (Bluetooth or LTE) or through some kind of API.

\n", "Title": "Motion sensor to monitor the movement of people", "Tags": "|hardware|sensors|wireless|bluetooth|", "Answer": "

You have the option of either using an activity service based on an existing device (i.e. a phone), or reverse-engineering an existing device (a Fitbit or derivative).

\n\n

This is well-established technology; you're likely to find some patented ideas, and some open-source code relating to the signal processing.

\n\n

The actual sensor ought not to pose a challenge; accelerometers are not new or novel.

\n\n

Here is a micro:bit stepometer lesson plan, using a cheap board which has all the hardware you need to prototype with. The lesson even has an extension which covers building a commercial product. The hardware is an MCU, BLE, an accelerometer and a few LEDs (basically an instance of cujo's answer), but I think it is the lesson plan that you are really looking for.
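For a flavour of the signal processing involved, here is a toy threshold-crossing step counter of the kind that lesson builds up to. The threshold and the sample data are illustrative assumptions, not values from the lesson; a real pedometer would also filter and debounce the signal.

```python
import math

THRESHOLD = 1.5  # acceleration magnitude in g that counts as a step (assumed)

def count_steps(samples):
    """Count upward crossings of THRESHOLD in (x, y, z) accelerometer samples."""
    steps, above = 0, False
    for x, y, z in samples:
        mag = math.sqrt(x * x + y * y + z * z)
        if mag > THRESHOLD and not above:
            steps += 1       # rising edge: a new step
            above = True
        elif mag <= THRESHOLD:
            above = False    # back below threshold, arm for the next step
    return steps

# Made-up samples: resting at ~1 g with two jolts from footfalls
walk = [(0, 0, 1.0), (0.5, 0.2, 1.6), (0, 0, 1.0), (0.4, 0.3, 1.7), (0, 0, 1.0)]
print(count_steps(walk))  # → 2
```

On a real device the same loop would run over the accelerometer stream and push the count out over BLE.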

\n" }, { "Id": "2245", "CreationDate": "2017-11-04T07:12:16.900", "Body": "

I need a device which can remotely tell me its location, either in realtime or with some delay. So, I naturally googled for GPS transmitters. The top product has much too large a thickness (I need it so that, it can be undetected through a few layers of cloth on both sides).

\n\n

Then I thought of a small mobile phone, with which I would use Android Device Manager to locate. But problem is, internet connectivity is gonna be scarce. Nonexistent. So not that, either.

\n\n

Here's my situation:

\n\n

A group of objects is going to be delivered to a group of persons, by means of placing said objects in a large sack, or similar. I need to track this sack. Therefore, I must place in it a device that transmits to me its location. Since it needs to be undetected, it must be thin. Since networks will be nonexistent, it must report over some other long-distance network, or, alternatively, store a list of locations every [30 mins/hour/few hours] which are transmitted when it finally reaches network connectivity. This device must have a battery life of a few days.

\n\n

That it be undetected is imperative. Should it be discovered, there will be loss of life. The bags may be similar to this or this.

\n\n

So, is either my GPS or Mobile idea usable? If not, can you suggest anything?

\n", "Title": "GPS Transmitter that works without Mobile Networks", "Tags": "|gps|", "Answer": "

I suggest the Light Bug.

\n\n

It features:

\n\n\n\n

The only thing I have not managed to determine is what happens if it loses contact with cell towers. I don't know if it would log location data points and retroactively transmit them. However, I have a feeling you'll have a hard time getting anything better than that, without designing something custom from a custom chip.

\n\n

It may theoretically be possible to flash your own software to the light bug, but I failed to find any information on that, and being a proprietary device, it is fairly doubtful.

\n" }, { "Id": "2249", "CreationDate": "2017-11-05T09:51:24.983", "Body": "

With IoT devices typically being built with low profit margins and low power specifications, functionality is typically limited to that which is needed. But for a device that is expected to last a number of years, there will be security vulnerabilities and issues that need fixing (see the Mirai botnet as an example)

\n\n

As an IoT manufacturer, how can I enable patching or upgrading of encryption algorithms or security protocols remotely, or simply ensure that the device is kept secure? What standards should I follow?

\n", "Title": "Standards for keeping devices' security up-to-date", "Tags": "|security|over-the-air-updates|", "Answer": "

If the firmware of your device can be made less complex than the bootloader required for a secured remote update, then do not implement remote update.

\n\n

I know the consensus is to have a secure and robust bootloader with strong public-key crypto authentication, safe rollover mechanisms, maybe a basic network stack; then to put an RTOS with a full IP+TLS network stack on top of that, and then your application on top of that. This is pure insanity for a low-cost, low-power device. IMHO, it leads to products that are updated every week or so, which tends to bother users because updates sometimes start at the wrong moment, fail, or break something. Updates drain a lot of power too, so users have to charge more often. And security is still far from guaranteed, as the attack surface is large.

\n\n

Your device is doing basic sensing/actuating, maybe some local triggering/displaying but not much? Skip all that.

\n\n

Write bare metal code, use a very basic stack, audit it thoroughly, do some formal verification if possible. And then you can be relatively confident that your device will not have security issues for the next decade.

\n\n

If all you have is a hammer, everything looks like a nail. That's why most coders try to write code to secure their insecure existing code. Writing less code doesn't always come naturally.

\n" }, { "Id": "2257", "CreationDate": "2017-11-06T20:43:32.763", "Body": "

Is there a way to relatively easily modernise an old door-opening system in the office (like the one in the image) and make it work with IoT? I'd like to build an app to open the door (in the first place) and, if possible, even build an Alexa skill (not necessary).

\n\n

Or maybe it would be easier to install a new (IoT) system with a new lock, etc.?

\n\n

Do you know any systems like this that you can recommend?

\n\n

\"Old

\n", "Title": "Modernise old door opening office system", "Tags": "|sensors|door-sensor|", "Answer": "

While it would be possible to cut into the wires and build a device that publishes state and triggers the solenoid that opens the door, it's probably not a good idea.

\n\n

Firstly, you have no idea what voltages it all works at; it could very well be mains AC power.

\n\n

You then have to design a network service that securely exposes the interface for the app (assuming a mobile phone). This is most likely internet-facing (definitely for an Alexa skill), so it needs to be done properly to stop just anybody opening the door.

\n\n

There is also the question of the audio: that will need something more powerful than an ESP8266, as it will need to do audio encoding/decoding in near real time and then forward it to the app.

\n\n

So yes, it is all possible, but probably not worth the effort unless you really like a challenge. You would be better off buying an off-the-shelf solution. I've not looked at combined systems, but something like the Ring doorbell, with its camera and app, deals with knowing who is at the door. And a number of companies make smart locks, e.g. Yale or Abloy.

\n\n

And if you are truly mad, you could always sign up for Amazon's new Key service, which lets Amazon's delivery drivers into your house when you're not there...

\n" }, { "Id": "2284", "CreationDate": "2017-11-14T20:44:33.087", "Body": "

Wikipedia in my mother tongue, Finnish, mentions in its article about the Internet of Things (sorry, it's in Finnish, but I'll translate the essential part) that IoT is a synonym for, or closely related to, the Industrial Internet.

\n\n

As translated to English the paragraph starts:

\n\n
\n

Internet of Things (in English Internet of Things, more shortly IoT, also Industrial Internet)...

\n
\n\n

The Industrial Internet Consortium describes the Industrial Internet so lightly that the butterfly picture on the page was the most concrete description of the term itself (meaning I had no clue what the concept means after reading; the definition just seems similar to that of every IoT ad Google can show me). So I am confused by these two terms.

\n\n

Are the terms connected in any way, or is the Wikipedia writer just mixing concepts? I would like an answer on a conceptual and definitional level, ideally with an English reference for what the terms mean and how they differ from each other.

\n\n

[1] https://fi.m.wikipedia.org/wiki/Esineiden_internet

\n\n

[2] http://www.iiconsortium.org/about-industrial-internet.htm

\n", "Title": "Industrial Internet vs IoT", "Tags": "|definitions|", "Answer": "

It looks like the 'Industrial Internet Consortium' have simply shortened the name from something slightly more self-explanatory: Industrial Internet of Things. Note in their footer:

\n
\n

The Industrial Internet Consortium is the world\u2019s leading organization transforming business and society by accelerating the Industrial Internet of Things (IIoT).

\n
\n

In a nutshell, IIoT is just the use of 'smart' devices, sensors and machinery in industry, rather than in the home environment for consumers.

\n

For a real-world, tangible example, take Rolls-Royce, who, among other things, produce aircraft engines. It was announced relatively recently that they were equipping their latest engines with sensors in order to monitor performance remotely and predict engine faults before they happen. This is done by transmitting sensor data to Azure IoT Suite and processing it from there, in order to make business decisions without having to physically check the engines.

\n

For a specific definition, I find TechTarget's article pretty useful:

\n
\n

The Industrial Internet of Things (IIoT) is the use of Internet of Things (IoT) technologies in manufacturing.

\n

Also known as the Industrial Internet, IIoT incorporates machine learning and big data technology, harnessing the sensor data, machine-to-machine (M2M) communication and automation technologies that have existed in industrial settings for years.

\n
\n

An article on Electronic Design discusses the differences between consumer, commercial and industrial IoT. The key point they raise is:

\n
\n

IoT systems and platforms are not created equal. The types of communication and operations used by consumer, commercial, and industrial IoT are very similar if not identical. The differences concern those who procure, and are allowed to procure, information or control within the system.

\n
\n

In brief: The Industrial Internet is just IoT concepts applied specifically to industry and manufacturing rather than any other application. This can, of course, bring benefits in many situations to business (although, as with much of IoT, it's important to actually evaluate whether connecting everything to the Internet is a good idea, or even useful at all!)

\n" }, { "Id": "2286", "CreationDate": "2017-11-15T08:55:30.503", "Body": "

I'm trying to connect to the NB-IoT network in my city, and I've already gotten the hardware as well as the confirmation that there is an NB-IoT cell tower in my neighborhood.

\n\n

Naturally, there's an NB-IoT SIM card as well that is used to connect to the network, but I have not received that unfortunately.

\n\n

Question: Is it somehow possible to connect to that guard-band network, which piggybacks (CMIIW) on the LTE network, by using a 4G/LTE SIM card? Since, if I recall correctly, the base station only has its firmware renewed so that the NB-IoT radio wave can be recognized.

\n\n

Additional question: there are SIM800 and SIM900 shields for GSM modules. Can I theoretically use a 4G/LTE SIM card and utilize the 2G network? I'm not quite sure I can just ask the kiosk owner next to my house \"Hey, is this data-only 4G card 2G-compatible?\", because I'm also not quite sure what that means.

\n\n

So, if I were to use the 3G/4G module, like this one for example... the communication type is LTE, but the transmission types are GPRS and EDGE (and HSPA+ as well as LTE), according to the data sheet. AFAIK, the transmission type concerns the architecture of the network as data travels from the base station towards the internet (CMIIW), but would our type of SIM card be relevant for that?

\n\n

One more question: I've never used a module with an embedded SIM before. How does that work? Am I subscribing to an ISP for a certain time? I'm rather new to cellular modules for microcontrollers; I only understand connection services the way they work for our mobile phones.

\n", "Title": "2G-4G SIM Cards for Cellular Modules, specifically the new NB-IoT", "Tags": "|gsm|", "Answer": "

You probably already know more about this than even 3GPP does. Looking at their documentation, which is not to be read by the faint-hearted, it seems that NB-IoT uses a new special category of LTE Advanced Pro, in 3GPP Release 13. Again, note that Release 13 does not mean LTE UE Cat 13, but the new category NB1. For further info, look here and here, and the links therein.

\n\n

So this seems to indicate that, in practice, your hardware (and its radio firmware) must support LTE Cat NB1, and you must use a SIM compatible with that network. Thus, the low-end hardware you mentioned probably doesn't support it. If you want to do more magic than that, you may consider getting into SDRs.

\n\n

Q: What is an embedded SIM (eSIM/eUICC)?

\n\n

A: Please post your questions separately... But it's just a software-emulated SIM, where the SIM file system is all handled by software.

\n" }, { "Id": "2309", "CreationDate": "2017-11-23T13:36:48.187", "Body": "

A friend told me that Apple is finally releasing a competitor for the Google Home and the Amazon Echo, called the Apple Homepod. A lively discussion ensued, in which several people were quite certain that the Apple Homepod has far fewer features than either the Amazon Echo or the Google Home.

\n\n

What will the Apple Homepod actually feature? How versatile will it be? Will it actually be less feature laden than the Google Home and the Amazon Echo?

\n", "Title": "What will the Apple HomePod be able to do?", "Tags": "|smart-home|amazon-echo|google-home|apple-homepod|", "Answer": "

Okay, I've done a bit of research, and here's what I've found:

\n

TheStar.com writes:

\n
\n

As a result, when the $350 (U.S.) gadget debuts early next year (Apple recently delayed the launch from December), the HomePod won\u2019t be able to do many of the things the Echo can. Amazon offers thousands of \u201cskills\u201d (voice-activated apps) that let users do a range of things (including buy stuff from Amazon). The Google Home, which debuted earlier this year, is similarly endowed.

\n

The HomePod will be mostly limited to playing tunes from Apple Music, controlling Apple-optimized smart home appliances and sending messages through an iPhone.

\n
\n

Looks like my friends were right: it's basically going to be a music playing device with none of the impressive capabilities of the Google Home or the Amazon Echo. It will have the capacity to send messages through a synced iPhone, but it won't be able to answer your questions like Alexa or Google.

\n" }, { "Id": "2315", "CreationDate": "2017-11-26T09:17:11.517", "Body": "

I understand I need the Philips Hue Bridge to control Philips Hue lights.

\n\n

Let's say I set everything up and configure a light to go on/off on a schedule (turn on light in evening, turn off in morning). Then I disconnect the Bridge (unplug/power off). Will the light still go on/off on the schedule, or do I need the Bridge connected for this to work?

\n\n

I guess what I'm asking is: Is the light itself aware of the time and schedule, or is it the Bridge which knows about the time/schedule and just sends on/off instructions to the light?

\n", "Title": "Can Philips Hue bulbs run schedules while the bridge is off?", "Tags": "|philips-hue|bridge|", "Answer": "

Nice question, love it! I've found some information on reddit. Someone asked,

\n
\n

[I am] wondering what happens if my network goes out. Can the lights and switches operate without the Bridge?

\n
\n

The answer given was,

\n
\n

No. The bridge is essential to controlling the lights. Without a bridge they are "dumb" lights that merely invoke their default behavior when powered on: bulbs to a warmish white light and light strips to their last activated colour.

\n

Without the bridge any kind of color control, or any other behavioral control whatsoever beyond turning on/off by flipping the light switch or unplugging, is entirely unavailable.

\n

The bridge is the brain of the operation. The bulbs just receive instructions from it; they have zero intelligence of their own. Your smartphone can be used to program/control the bridge, but cannot in any form address the lights directly: they speak a different protocol altogether from wifi called ZigBee.

\n
\n

In other words, the light does not know the schedule; it is very much the bridge that sends the instructions to the light. The light is basically an endpoint in the system with no intelligence of its own: it depends on the Bridge system to turn it on and off.

\n" }, { "Id": "2317", "CreationDate": "2017-11-26T12:12:46.827", "Body": "

Is there a way to turn fluorescent strip lights on and off using Google Home?

\n\n

Most IoT light solutions I have found require bulb replacement, are dimmer switches or are not compatible with this type of lighting. The current switch controls 2 different strip lights to make things more complicated. We are in the UK.

\n", "Title": "Smart fluorescent strip lights", "Tags": "|smart-home|google-home|lighting|", "Answer": "

To control fluorescent lights, you should look for a device called an \"appliance switch\". This will control any load by switching the power on or off; it does not \"dim\" the load. With it, your fluorescent lights will work just like your ordinary switch does today.

\n\n

Be aware that home automation systems generally do not consider appliance switches to be in the same category as \"light\" switches. This is only an issue if you say \"Hey Google, turn all the lights on\" and expect this switch to be included. But you can name it \"bedroom light\", and say \"Hey Google, turn the bedroom light on\", and it will work.

\n\n

(Also note that some very old fluorescent light fixtures have a manual starter. You must press and hold a button until the lights come on. If that's what you have, no commercial home automation electronics are designed to control them.)

\n" }, { "Id": "2319", "CreationDate": "2017-11-26T17:58:21.457", "Body": "

I want to write an app that will relay information about public transport via the Google Assistant if prompted.

\n\n

Does the Google Assistant SDK allow an app to send text to be read out?

\n", "Title": "Send Google Assistant text to read out via the SDK", "Tags": "|google-home|google-assistant|", "Answer": "

Yes, the Google Assistant supports Actions which allow developers to respond to requests from users. Actions can only respond when prompted; they cannot speak unannounced at this time.

\n\n

Using the Actions SDK, you must first define some actions:

\n\n
{\n  \"actions\": [\n    {\n      \"name\": \"MAIN\",\n      \"intent\": {\n        \"name\": \"actions.intent.MAIN\"\n      },\n      \"fulfillment\": {\n        \"conversationName\": \"demoApp\"\n      }\n    }\n  ],\n  \"conversations\": {\n    \"demoApp\": {\n      \"name\": \"demoApp\",\n      \"url\": \"https://example.com/demoApp\"\n    }\n  }\n}\n
\n\n

This would use the fulfillment demoApp to respond to the command 'Ok Google, talk to [action name]'. This essentially amounts to a request to the specified URL.

\n\n

You then need to write some server code to handle these requests. Google provide a library for Node.js which might be helpful. I'm just going to quote\nthe example code there as it's sufficiently clear and helpful to point you\nin the right direction.

\n\n
'use strict';\n\nconst ActionsSdkApp = require('actions-on-google').ActionsSdkApp;\n\nexports.<insertCloudFunctionName> = (req, res) => {\n  const app = new ActionsSdkApp({request: req, response: res});\n\n  function mainIntent (app) {\n    // Put your message here, using app.tell.\n    app.tell('Hello, world!');\n  }\n\n  let actionMap = new Map();\n  actionMap.set(app.StandardIntents.MAIN, mainIntent);\n  app.handleRequest(actionMap);\n}\n
\n\n

You then just need to deploy and submit your app. You would need to put your logic to determine what text to say inside\nof mainIntent, or create new intents as necessary.

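To make that concrete, here is a toy helper for the public-transport use case (the name buildTransitReply and the departure object shape are my own illustration, not part of the Actions SDK). It builds the sentence you would pass to app.tell inside mainIntent, assuming your fulfillment has already fetched departure data from some transit API:

```javascript
// Hypothetical helper: formats public-transport departures into a single
// spoken sentence suitable for app.tell(). The departure objects are
// assumed to come from whatever transit API your fulfillment queries.
function buildTransitReply(stop, departures) {
  if (departures.length === 0) {
    return `There are no upcoming departures from ${stop}.`;
  }
  const parts = departures.map(
    d => `line ${d.line} to ${d.destination} in ${d.minutes} minutes`
  );
  return `From ${stop}: ${parts.join(', ')}.`;
}

// Inside mainIntent you might then call:
//   app.tell(buildTransitReply('Central Station', fetchedDepartures));
console.log(buildTransitReply('Central Station', [
  { line: '42', destination: 'Airport', minutes: 5 },
  { line: '7', destination: 'Harbour', minutes: 12 }
])); // From Central Station: line 42 to Airport in 5 minutes, line 7 to Harbour in 12 minutes.
```

Keeping the formatting in a pure function like this also makes it easy to unit-test your responses without invoking the Assistant at all.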
\n" }, { "Id": "2327", "CreationDate": "2017-11-27T23:30:37.360", "Body": "

There are these devices that you can plug into your car so that the insurance company can get real-time data to &quot;lower&quot; your insurance cost.

\n

\"\"

\n

Image from U.S.News, 2016.

\n

How do they connect to the internet? Satellites? Mobile network? Searching on Google doesn't give much information:

\n
\n

Once the device is plugged into the car\u2019s computer, it can see all the data the computer collects and it grabs whatever the insurance company has programmed it to find. It then uses wireless technology to transmit that information to the insurance company.

\n

U.S.News., How Do Those Car Insurance Tracking Devices Work?,\n2016

\n
\n

Other than that, how safe are those devices? Are man-in-the-middle attacks possible, and could an attacker change the data being sent?

\n", "Title": "How do car insurance tracking devices connect to the internet?", "Tags": "|security|communication|", "Answer": "

Most of the telematics devices used by insurance companies use a cellular modem (typically 2G, which is commonly used for low-cost, low-data-rate devices) to phone home, combined with a couple of different sensors such as an accelerometer. Most also plug into the OBD-II vehicle diagnostics port to collect data on the car as well.

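As an illustration of the kind of data these dongles read from the diagnostics port, here is a small sketch (mine, not any insurer's actual firmware) decoding two standard OBD-II Mode 01 PIDs, engine RPM (PID 0x0C) and vehicle speed (PID 0x0D), using the well-known formulas from the SAE J1979 standard:

```javascript
// Standard OBD-II Mode 01 decoding (SAE J1979):
//   PID 0x0C (engine RPM):    two data bytes A, B -> (256*A + B) / 4 rpm
//   PID 0x0D (vehicle speed): one data byte A     -> A km/h
function decodeObd2(pid, bytes) {
  switch (pid) {
    case 0x0c:
      return { rpm: (256 * bytes[0] + bytes[1]) / 4 };
    case 0x0d:
      return { speedKmh: bytes[0] };
    default:
      throw new Error(`PID 0x${pid.toString(16)} not handled in this sketch`);
  }
}

console.log(decodeObd2(0x0c, [0x1a, 0xf8])); // { rpm: 1726 }
console.log(decodeObd2(0x0d, [0x3c]));       // { speedKmh: 60 }
```

From a stream of such samples the insurer can derive acceleration and braking trends, as the quote below describes.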
\n\n

From In-Car Sensors Put Insurers In The Driver's Seat:

\n\n
\n

The palm-sized devices plug into a car\u2019s data port, the same spot\n mechanics use for vehicle diagnostics. (All cars made since 1996 have\n the ports.) The devices record information about mileage and speed,\n which is then used to calculate data about acceleration and braking\n trends. Some systems also have GPS capability that is relayed to\n insurance companies for research purposes \u2014 or to owners like Branson\n who opt for driver monitoring.

\n
\n\n

There has been some concern about security expressed, for instance see Progressive Insurance's Driver Tracking Tool Is Ridiculously Insecure. This article has a number of links to other articles and has this synopsis of the Progressive dongle that was investigated.

\n\n
\n

The dongle doesn't use any kind of network authentication to encrypt\n the data, the firmware isn't signed or validated, and it uses the\n infamously insecure FTP \u2013 the same protocol to upload and download\n files from your home server \u2013 to keep the bits flowing.

\n
\n\n

The bottom line so far as this article is concerned is:

\n\n
\n

Instead, it's more proof that security in the era of the Internet of\n Things \u2013 where everything you own is somehow connected \u2013 is woefully\n lacking.

\n
\n\n

See as well Car insurance companies want to track your every move\u2014and you\u2019re going to let them.

\n\n

Since smartphones have a fairly nice sensor package (accelerometer, GPS, etc.), a smartphone app can provide much of the information needed by an insurer. See Insurers will now be able to track driver behavior via smartphones.

\n\n
\n

UBI offers the insurance industry new opportunities for tailored\n discount programs. Notably, they can switch from relying OBDII dongles\n plugged into the customer's car and instead use mobile apps that\n travel with the driver, whether he's traveling in his own car or\n another vehicle.

\n
\n" }, { "Id": "2334", "CreationDate": "2017-11-29T02:42:17.730", "Body": "

Not sure if this is the right platform (please let me know any recommendations if not), but I made a display monitor with a Raspberry Pi that pulls in information such as news feeds, weather, local news and sports scores. I'm trying to learn more about IoT for an assignment and would like to use my Pi project as an example.

\n\n

As of now, it's just a monitor that pulls in feeds. Does this constitute an IoT device or does the information need to be relayed back somewhere?

\n\n

I'm a beginner, but I was hoping someone might have some recommendations on how I could make this a true IoT device if it isn't currently. Thank you.

\n", "Title": "Raspberry Pi Monitor: Qualify as IoT device?", "Tags": "|raspberry-pi|", "Answer": "

The core idea of the Internet of Things is to have small computing devices which provide a specific, targeted interface to some thing or event or device. The small computing device collects measurements of various kinds and/or it is used to modify the state of the thing or event or device.

\n\n

A specific example is an automated weather station in which a small computing device has various sensors attached and the computing device samples the sensors and collects the various measurements reported by the sensors and then provides those measurements to some other device.

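The weather-station pattern described above can be sketched in a few lines: sample the attached sensors, then hand each measurement to something that relays it onward. This is only an illustration of the pattern; readSensors() is a stub, where a real station would have drivers for actual temperature/humidity hardware, and the publish callback would be an HTTP POST or MQTT publish:

```javascript
// Toy sketch of the IoT pattern described above: sample attached sensors,
// then push each measurement to a collector. readSensors() is a stub; a
// real station would talk to real temperature/humidity hardware.
function readSensors() {
  return { temperatureC: 21.5, humidityPct: 48 };
}

function collectMeasurement(readFn, publishFn) {
  const reading = readFn();
  // Attach a timestamp so the receiving side can order the samples.
  const measurement = { ...reading, takenAt: new Date().toISOString() };
  publishFn(measurement); // e.g. an HTTP POST or MQTT publish in real life
  return measurement;
}

const sent = [];
collectMeasurement(readSensors, m => sent.push(m));
console.log(sent.length); // 1
```

The key difference from a display-only project is that the device is the *source* of measurements relayed to some other system, not just a consumer of feeds.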
\n\n

Another would be an automated digital camera set up so that when wildlife passes by, the camera takes photographs and then sends them to some other device for processing or review.

\n\n

What you have done is no different from what could be done, and often is done, with a standard workstation. It is not really an Internet of Things application, as something similar is available on my Windows 10 desktop computer, for instance.

\n\n

I will tell you what I am doing with my Raspberry Pi.

\n\n

My first application was something exactly like what you describe; PiClock is the actual application, and the source code is available on the internet. It displays an analogue clock along with the weather forecast and driving conditions.

\n\n

The next thing I did was to purchase an Osoyoo Raspberry Pi Starter Kit from Amazon. This kit contains a variety of components, sensors, a servo motor, etc. and the manufacturer has projects that start off simple on their web site. I am using this to learn the basics of electronics for IoT devices.

\n\n

I have started a GitHub repository so that I can keep my notes and source code in a safe place. https://github.com/RichardChambers/raspberrypi/

\n\n

Once I have gained some degree of knowledge concerning the electronics of IoT I am going to take a look at some of the projects that others have done and give one a shot.

\n\n

Here is a list of resources on IoT projects for Raspberry Pi:

\n\n

10 Raspberry Pi Projects For Learning IoT

\n\n

hackster.io projects for Raspberry Pi

\n\n

IoT Projects based on Raspberry Pi, Arduino, ESP8266, etc.

\n\n

100+ Ultimate list of IoT projects for engineering students

\n" }, { "Id": "2341", "CreationDate": "2017-11-30T22:45:53.993", "Body": "

I'm struggling at the moment with reversing the commands between a Wi-Fi plug ( Amazon-Link for Wi-Fi Plug) and the associated phone app. It looks like the app controls the plug via an IPDC (IP Device Control) packet. This protocol seems to be used for VoIP and telephone networks. Has anybody ever come across this protocol on IoT smart home devices?

\n\n

\".28

\n\n

In the screenshot .28 is the phone and .78 is the Wi-Fi plug.

\n", "Title": "Wi-Fi Plug uses IPDC (Internet Protocol Device Control) for controlling via App?", "Tags": "|smart-home|wifi|protocols|smart-plugs|", "Answer": "

That's a very strange protocol combination for a smart plug to use. SS7 and IPDC are\u2014as you already point out yourself\u2014protocols for VoIP communication. I can't say that I've come across those two in particular in the IoT environment. There might be one reason behind it, however. Do keep in mind that the following is just a guess, one that makes a certain amount of sense, but I can't offer any proof for it.

\n\n

Since smart plugs are mains powered, they don't care about the usual low-power considerations of IoT protocols and might choose their protocols for other reasons. One likely reason for using VoIP protocols (besides the developer arbitrarily picking something they already know) is quality of service: almost any router will prioritize VoIP packets via QoS, which reduces latency and increases reliability for the application on top. Maybe they tried to game that system with the plug.

\n" }, { "Id": "2351", "CreationDate": "2017-12-04T18:37:20.713", "Body": "

I have some needs

\n\n\n\n

Does a read only deadbolt exist, or are all smart locks motor equipped to allow them to be opened remotely?

\n", "Title": "Smart deadbolt without motor -- Read-only Access", "Tags": "|hardware|google-home|", "Answer": "

So after reading Helmar and Ghanima's excellent answers, I doubled down and did some research. After a little bit of work I came up with the following solution. Images are available here, in an Imgur album.

\n\n
    \n
  1. Purchase a flush mount reed switch and a z-wave dry contact sensor from Amazon

  2. \n
  3. Drill a hole in the back of the deadbolt cavity to place the flush mount reed sensor

  4. \n
  5. Pull the wires through the side of the door casing

  6. \n
  7. Drill a small cavity in the deadbolt itself and place a magnet in it (I used a bucky ball)

  8. \n
  9. Wire the reed switch to the dry contact sensor

  10. \n
  11. Bang the door casing back together

  12. \n
  13. Use packing tape or glue to secure magnet in deadbolt.

  14. \n
\n" }, { "Id": "2352", "CreationDate": "2017-12-05T05:33:35.267", "Body": "

I'm trying to connect my Belkin Wemo smart bulbs to my Google Home (really my Google Assistant via the Google Home app).

\n\n

\"Wemo

\n\n

I'm following the steps listed on Belkin's Support Site, but after I click \"Ready to Verify\" (step 4), it asks for the name of my network and MAC address. After a little research I found them, but it's saying I've entered the wrong details.

\n\n

How do I make Wemo and Google play nice?

\n\n
\n\n\n", "Title": "How do I connect Wemo Bulbs to Google Home", "Tags": "|google-assistant|wemo|", "Answer": "

Despite what Belkin says on their website, Wemo and Google Home aren't fully compatible.

\n\n

\"Belkin

\n\n

Some Wemo smart home products are natively compatible with Google Assistant / Home. These include:

\n\n\n\n

It does not include Belkin Wemo Bulbs or anything not on that list. It doesn't say this on the website, but after consulting with their helpdesk (for hours), that is what I was told.

\n\n

But I already spend hundreds of dollars on Belkin Wemo Bulbs...

\n\n

Yeah, me too. There are a couple of workarounds.

\n\n

Option 1 - Get a Samsung SmartThings hub.

\n\n

Apparently if you use a SmartThings hub you can control your Wemo \"Smart\" Bulbs natively from the Google Assistant. I didn't test this out because I'm already too committed to this and don't want to spend more money on reversing the future proofing of my home.

\n\n

Option 2 - The smart/free option: Use IFTTT

\n\n

Using IFTTT you can control your Wemo Bulbs. And IFTTT is natively supported by Google. Yeh.

\n\n

Other options

\n\n

A better option is to avoid Belkin like the plague and stick to Philips or Samsung for your smart lighting.

\n" }, { "Id": "2359", "CreationDate": "2017-12-07T15:07:33.733", "Body": "

I am always interested in Internet of Things applications in agriculture, and I recently read about the MooMonitor on independent.ie. They claim that the MooMonitor can keep tabs on cow health, fertility, and heats.

\n\n

My question is, how does the MooMonitor keep tabs on a cow being in heat? Is it simply physical comportmental difference from the norm that it is detecting, or is there something else it actually measures, like body temperature? Also, does it take the 3 week cycle into account to help predict when to look for differences in whatever signs it is looking for?

\n", "Title": "How does the MooMonitor detect bovine heats?", "Tags": "|agriculture|", "Answer": "

The MooMonitor uses accelerometers to monitor the physical activity level of a cow, using their measurements to determine whether the cow is in heat. Research appears to show that cows entering estrus become more active than normal.

\n\n

See Dairymaster MooMonitor: The app for heat detection & results of on-farm studies.

\n\n

Also see MooMonitor is a real cash cow, which says:

\n\n
\n

Explaining the MooMonitor, a device that sits around a cow\u2019s neck and\n uses accelerometers to tell if the cow is in heat, Harty said: \u201cThere\n are lots of jokes that go around the place about alternative uses for\n it, but basically in order to produce milk farmers need to be\n producing calves and that\u2019s why the fertility cycle is so important to\n milk production. There is a narrow window of opportunity that farmers\n need to get right.

\n \n

\u201cBelieve it or not, we were inspired by the technologies the military\n put in torpedoes and rockets to hit targets \u2013 the accelerometer\n technology we take for granted in phones today \u2013 to quantify cow\n behaviour. When the cow is in heat they tend to be more active so we\n have algorithms built in that watch for changes in behaviour.\u201d

\n
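As a toy illustration of the accelerometer approach (my own sketch, not Dairymaster's actual algorithm), one could flag a cow whose recent activity rises well above her own baseline:

```javascript
// Toy sketch: flag elevated activity from accelerometer magnitudes.
// Not the MooMonitor's real algorithm, just the general idea of
// comparing recent activity against the animal's own baseline.
function mean(values) {
  return values.reduce((a, b) => a + b, 0) / values.length;
}

// baseline:  magnitude samples from the cow's normal days
// recent:    magnitude samples from the last few hours
// threshold: how many times the baseline counts as "elevated"
function isActivityElevated(baseline, recent, threshold = 1.5) {
  return mean(recent) > threshold * mean(baseline);
}

console.log(isActivityElevated([1.0, 1.1, 0.9], [1.8, 2.0, 1.9])); // true
console.log(isActivityElevated([1.0, 1.1, 0.9], [1.0, 1.0, 1.1])); // false
```

A production system would also fold in the roughly three-week estrus cycle the question mentions, e.g. by weighting the comparison around the expected window.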
\n" }, { "Id": "2366", "CreationDate": "2017-12-09T03:02:52.153", "Body": "

To set up my Wio Node, I installed the Wio Android app.\nThe procedure never completes when I try to connect the Wio device, as shown in the image.

\n\n

\"Wio

\n\n

After that I installed wio-cli and a driver for Mac.

\n\n

Then logged in.

\n\n
 % wio login\n1.) Global Server (New)[https://us.wio.seeed.io]\n2.) Chinese Server [https://cn.wio.seeed.io]\n3.) Customize Server\n? Please choice server: 1\n? First get wio user token from https://wio.seeed.io/login\n? Then enter token: xxxxxxxxxxxx\n> Successfully completed login! Check state, see 'wio state'\n
\n\n

and run wio setup.

\n\n
 % wio setup\n> Setup is easy! Let's get started...\n\n! PROTIP: Hold the Configure button ~4s into Configure Mode!\n! PROTIP: Please make sure you are connected to the Server\n\n? Would you like continue? [Y/n]: y\n0.) Wio Link v1.0\n1.) Wio Node v1.0\n? Please choice the board type: 1\n\n! PROTIP: Wireless setup of Wio!\n! PROTIP: You need manually change your Wi-Fi network to Wio's network.\n! PROTIP: You will lose your connection to the internet periodically.\n\n? Please connect to the Wio_* network now. Press enter when ready: y\n? Would you like to manually enter your Wi-Fi network configuration? [y/N]: y\n> Please enter the SSID of your Wi-Fi network: TANEMAKI\n> Please enter your Wi-Fi network password (leave blank for none): ilovemoguko\n> Please enter the name of a device will be created: winnode_tetsu\n> Here's what we're going to send to the Wio:\n\n> Wi-Fi network: xxxxxxx\n> Password: xxxxxxxxx\n> Device name: winnode_ironsand\n\n? Would you like to continue with the information shown above? [Y/n]: y\n
\n\n

But wio list shows no device.

\n\n
% wio list\nNo Wio devices could be found.\n
\n\n

What am I doing wrong? What can I do to solve the problem?

\n", "Title": "Wio Node setup failed", "Tags": "|hardware|", "Answer": "

I finally found the cause of the error.

\n\n

I was trying to connect to a 5 GHz Wi-Fi network, but the Wio Node can only connect to 2.4 GHz networks.

\n\n

I just needed to connect to a 2.4 GHz Wi-Fi network. That's all.

\n" }, { "Id": "2374", "CreationDate": "2017-12-10T05:20:47.813", "Body": "

While playing with my Google Home I discovered apart from \"Okay Google\" and \"Hey Google\" my Home responds to \"Okay Doodle\" and \"Hey Doodle\".

\n\n

Officially, \"Okay Google\" and \"Hey Google\" are documented wake words. Is there an alternate list of wake words, even if unofficial?

\n", "Title": "Is there a known list of wake words for Google Home?", "Tags": "|google-home|google-assistant|", "Answer": "

While watching JoJo&#39;s Bizarre Adventure - Golden Wind - Episode 22, &quot;G in Guts&quot;, at 18:28 a character says &quot;So Bucciarati&quot;, and this activated my Google Home. I replayed the scene and even said the phrase myself, and it consistently works. However, it must be pronounced correctly to work.

\n" }, { "Id": "2381", "CreationDate": "2017-12-11T20:12:29.483", "Body": "

I'm building an Alexa skill, and I have a slot called 'Name' where I want to capture a name. I then want to find the matching name in the database. But let's say that the database contains the name \"Alex Baumgartner\", while Alexa returns \"Alex Baugartner\" in the slot.

\n\n

Obviously, it doesn't match exactly but it matches with 0.95 probability. How can I check this probability or in some way verify that the voice input matches with the database record?

\n\n

Is there a service online, a tool or algorithm for checking the probability of matching words that I should use, or is there another approach?

\n", "Title": "How can I match database records (e.g. names) with voice input from an Alexa Skill?", "Tags": "|alexa|amazon-echo|voice-recognition|", "Answer": "

Not an Alexa-specific answer, but look into support for soundex and similar phonetic hashing systems in your platform and/or database. For example, the MySQL database has a soundex() function that can be used for this. BMPM is another algorithm, supported out of the box by Apache Solr/Lucene along with a number of others.

\n\n

https://lucene.apache.org/solr/guide/6_6/phonetic-matching.html

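To make the idea concrete, here is a minimal American Soundex sketch (MySQL's soundex() keeps more than three digits, but the classic 4-character form shown here is the same idea). Note that Soundex is prefix-sensitive, so the pair from the question, 'Baumgartner' / 'Baugartner', still hashes differently (B526 vs B263), which is exactly why richer phonetic algorithms like BMPM exist:

```javascript
// Minimal American Soundex: keep the first letter, then encode the rest
// as digits, collapsing adjacent duplicates. Vowels reset the previous
// code; 'h' and 'w' do not (the standard h/w rule).
function soundex(name) {
  const codes = { b:'1', f:'1', p:'1', v:'1',
                  c:'2', g:'2', j:'2', k:'2', q:'2', s:'2', x:'2', z:'2',
                  d:'3', t:'3', l:'4', m:'5', n:'5', r:'6' };
  const letters = name.toLowerCase().replace(/[^a-z]/g, '');
  if (!letters) return '';
  let result = letters[0].toUpperCase();
  let prev = codes[letters[0]] || '';
  for (let i = 1; i < letters.length && result.length < 4; i++) {
    const ch = letters[i];
    const code = codes[ch] || '';
    if (code && code !== prev) result += code;
    if (ch !== 'h' && ch !== 'w') prev = code; // h/w keep the previous code
  }
  return result.padEnd(4, '0');
}

console.log(soundex('Robert'), soundex('Rupert')); // R163 R163
```

So you could store soundex(name) alongside each database record and compare hashes at lookup time, falling back to an edit-distance measure (e.g. Levenshtein) when prefixes differ.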
\n" }, { "Id": "2382", "CreationDate": "2017-12-12T11:29:39.443", "Body": "

I recently got my hands on an Echo Dot.
\nI'm hesitating to install it, since I'm concerned about my privacy. \nAccording to Amazon's privacy notice, they may use all data they capture.

\n\n

I've noticed that the Amazon Echo comes with a mic mute button, which would be perfect for cutting down on voice data. But since Alexa is closed-source, I wouldn't be convinced that this button will keep my mic off under all circumstances.

\n\n

Is the Echo mic mute button a software or hardware kill switch?

\n\n

My searches didn't turn out much, mainly because the web is filled with low-quality news and non-technical articles.

\n", "Title": "Is the Amazon Echo mic mute a hardware switch?", "Tags": "|alexa|amazon-echo|privacy|microphones|", "Answer": "

According to Jeff Bezos, it's a hardware button, and various teardown sources seem to agree.

\n\n

A forum post at the EEVblog forums quotes a video featuring Jeff Bezos, the founder of Amazon:

\n\n
\n

In this video about Jeff Bezos being interviewed by Walter Isaacson at around the 6 min mark, Bezos claims the mute button on the Amazon Echo is physically connected to the mic amplification circuit, making it impossible to enable again via software.

\n
\n\n

This is also supported by a reddit thread in which it is said that \"Basically it is a physical analog connection that cuts off circuit flow to the mic.\" Another commenter added:

\n\n
\n

No voltage to mics when mute is on. You're correct as well about the state of mute being software controllable. That said, the state of the LED under the button is tied electrically to if the mics are on (same circuit), so there's no possible way the mics can be powered without you knowing it.

\n
\n\n

That said, those sources aren't particularly clear on which models they're referring to. Taking a further look at the teardown linked in the forum post may be interesting to verify this.

\n\n

Another source that supports the 'hardware button' theory is the Apple Insider site, which discusses a previous Echo vulnerability. It notes that:

\n\n
\n

Despite gaining access to the \"always-on\" microphone, the hack cannot get around the physical mute button on the device, which disables the microphone completely. This switch is a hardware mechanism that cannot be altered with software, though it is feasible that with extra work this button could be physically disabled by a determined attacker.

\n
\n\n

Disappointingly, the iFixit teardown doesn't include a good image or any commentary on the mute button circuitry for the proper Echo device. Even so, there's a substantial amount of evidence that it may be a hardware button after all.

\n" }, { "Id": "2389", "CreationDate": "2017-12-13T03:43:36.160", "Body": "

I'm new to the world of IoT, I don't even have an Arduino or similar board yet... but I am very interested, especially with cryptocurrency such as IOTA making such things potentially profitable. Sadly, this also seems like a really new field too, because I have been unable to find any resource, such as a tutorial, for getting an Arduino or similar device to be able to accept IOTA as payment to access some sort of sensor on it. My questions are:

\n\n
    \n
  1. Can an Arduino, Raspberry Pi or some other board be programmed to do this, and does it have sufficient specs? If so, which?
  2. \n
  3. Are there any guides, tutorials, \"Hello World\" or other such \"Get Started\" guides to help implement such a thing? Anything in this sphere would probably be helpful to me as a true newb to this stuff.
  4. \n
\n", "Title": "IOTA on Arduino or Raspberry Pi or Similar Board?", "Tags": "|raspberry-pi|arduino|iota|", "Answer": "

To interact with IOTA, you must be running or have access to an IRI node, which usually has its API exposed. IOTA is still in a heavy development phase, and an embedded Linux device such as the Raspberry Pi doesn't have the resources to run an IRI node. The open source Ruuvi tag is a good example of an IoT device pushing data onto the IOTA tangle. The tags are basically Bluetooth-enabled sensors, and will usually be tethered to a Raspberry Pi, which then relays this data to a public IRI server.

\n" }, { "Id": "2392", "CreationDate": "2017-12-13T16:17:01.337", "Body": "

In Australia, the SmartThings hub isn't currently on the market:

\n\n
\n

Not yet released in Australia, Samsung\u2019s SmartThings platform promises loads of great home automation.

\n
\n\n

If I have some products in mind that use ZigBee and Z-Wave and want to benefit from the features of SmartThings, can I order a SmartThings hub from another country, like the US, and then use it in Australia?

\n\n

Will Australian smart devices be compatible with this imported hub, or will I need to import the devices too?

\n", "Title": "Can I use a US or UK SmartThings hub in Australia?", "Tags": "|samsung-smartthings|zwave|", "Answer": "

I know this is an older thread, but thought this may be helpful.

\n\n

Technically, only the Z-Wave portion of SmartThings is illegal in Australia.\nThe band used is unlicensed, but still managed under legislation.

\n\n

The band does not interfere with any licenced bands, such as mobile or TV, but it can interfere with radar and the like, and as there is no ability for the overseas hub to monitor this, you would be liable for any interference.

\n\n

ZigBee and IP-based protocols use the 2.4 GHz band, the same as Wi-Fi, so they are perfectly legal and usable in Australia.

\n\n

You have two options. Both include disabling the inbuilt Z-Wave in SmartThings.

\n\n

Option 1) Buy only ZigBee or IP-based products.

\n\n

Option 2) Buy a Z-Wave USB dongle that uses the Australian frequencies.

\n\n

Also note that SmartThings and Samsung will not provide any technical support to Australia.

\n" }, { "Id": "2394", "CreationDate": "2017-12-13T19:35:53.977", "Body": "

I'm doing some very basic automation - lights, cameras, motion sensors. My key requirement is to be able to turn certain lights on or off based on motion and time of day, with the ability to add more complex logic later via custom programming.

\n\n

I bought some products without much planning, just based on individual product reviews and recommendations, and ended up with 3 hubs - NetGear (Arlo cameras), Lutron Smart Bridge (light dimmer), SmartThings (motion sensors). I also have an Amcrest camera because one area requires PTZ (pan-tilt-zoom) functionality for proper monitoring. Lutron integrates with SmartThings but has to go through the Lutron Bridge. I also bought a TP-Link device advertised as \"no hub required\", only to find out after delivery that the advanced functionality requires yet another hub.

\n\n

The excessive number of hubs is insane and I'd like to consolidate everything down into one hub (or even better, zero hubs, as my Wi-Fi mesh has better coverage throughout the house than any of the proprietary networks).

\n\n

I'm still within the return window for most of the stuff I bought, so the cost of switching some of the products is reduced. Is getting down to one hub possible at this time? I'm really intrigued by the new Echo Hub with ZigBee support.

\n", "Title": "Reducing the number of Hubs in a DIY Smart Home Automation project?", "Tags": "|smart-home|samsung-smartthings|zigbee|zwave|tp-link|", "Answer": "

Unfortunately, it doesn't look like your goal of just one hub is achievable. According to the Arlo documentation:

\n
\n

When do I need a base station?

\n

You need a base station to connect Arlo Wire-Free and Arlo Pro Wire-Free cameras.

\n

You don't need a base station to connect Arlo Q and Arlo Q Plus cameras. They connect directly to your Wi-Fi router.

\n
\n

It is confirmed in the support forum that the base station is required unless you have an Arlo Q camera.

\n

It's a similar case with the Lutron Caseta: they can't integrate directly with your SmartThings hub and the Lutron SmartBridge is needed.

\n

So, as you suspect, there is no way to get rid of the hubs, however ludicrous it may seem. If this isn't palatable for you, it would seem that returning them is the only option, unfortunately, and you'll have to take a look at replacements.

\n

Since you expressed interest in the Echo Plus, here is the list of compatible devices from Amazon. As you can see, the list isn't huge, but it does vastly simplify your setup if you are able to use them. Interestingly, there aren't any switches listed, so you might have to resort to at least one hub like the SmartThings hub which tends to have pretty good support for many ZigBee and Z-Wave devices.

\n

It seems that great care is needed to avoid having dozens of hubs in your house \u2014 and many home automators notice this. It's simply a case of competing standards...

\n" }, { "Id": "2409", "CreationDate": "2017-12-18T03:05:02.430", "Body": "

Situation

\n\n

I need to access an ESP8266's Wi-Fi local server from outside.

\n\n

Like the Xiaomi Yeelight (YeeLight Introduction Web Site Link) or LOHAS LED \n (LOHAS LED Web Site), I have to control it from outside, not from within the same Wi-Fi network.

\n\n

I can only think of port forwarding, but I don't think Yeelight uses port forwarding (just my opinion).

\n\n

I don't know whether YeeLight forces one to activate port forwarding or works without it, but I wonder how YeeLight can control the light bulb from outside.

\n\n

Question

\n\n
    \n
  1. In order to control a Wi-Fi IoT product like the YeeLight, do I have to create a local server which can control the product's GPIO? Is that right?

  2. \n
  3. If question 1 is right, how can I access the Wi-Fi IoT product's local server from outside without port forwarding?

  4. \n
  5. Is there any way to activate port forwarding from the endpoint (not from the router)?

  6. \n
\n", "Title": "Is there any way to access local server from outside without port forwarding?", "Tags": "|networking|wifi|communication|esp8266|", "Answer": "
    \n
  1. You have to provide an endpoint of some sort to allow control over a given device.

  2. \n
  3. Port forwarding is not the only option: the device can connect out to a publicly accessible server on the internet; once this connection is established, commands can be sent over it to the device. This is how many IoT devices work. Example protocols used for this include MQTT, but long-poll HTTP is also an option.

  4. \n
  5. Look at something called UPnP, which is a way for a device to request that the router set up a specific set of port forwarding rules for it.

  6. \n
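The outbound-connection idea in point 2 can be sketched in plain Python. Here the public "cloud" endpoint is simulated in-process with the standard library; in a real system it would be an MQTT broker or an HTTP service on the internet, and the endpoint name, command names, and JSON shape below are all illustrative assumptions:

```python
import http.client
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the public "cloud" endpoint that queues commands for the device.
PENDING = ["turn_on"]  # commands waiting to be collected

class CommandHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The device polls; a real long-poll server would hold the request
        # open until a command arrives instead of answering immediately.
        body = json.dumps({"cmd": PENDING.pop(0) if PENDING else None}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CommandHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Device side: it connects OUT to the server, so no port forwarding is needed.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/commands")
command = json.loads(conn.getresponse().read())["cmd"]
print(command)  # -> turn_on
server.shutdown()
```

An MQTT client does essentially the same thing, except the device keeps a single outbound TCP connection open and the broker pushes commands down it, which is why MQTT suits battery- and NAT-constrained devices.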
\n" }, { "Id": "2411", "CreationDate": "2017-12-18T06:49:53.673", "Body": "

I have a room where the lights are controlled by a Lutron Caseta switch connected to a Lutron SmartBridge and Amazon Alexa. UPDATE: I don't require the use of Alexa in the solution, I'm just saying I have it, in case it helps.

\n\n

What is the simplest way (with the least amount of additional hardware) to automatically turn off the lights when the room is vacant?

\n\n

Additional info

\n\n\n\n

Some quick research on the Internet indicates Lutron motion sensors don't work with Lutron Caseta or SmartBridge and the most popular way to implement this requirement is with a Samsung SmartThings hub and Samsung motion sensor, using a cloud-to-cloud connection from SmartThings to the Lutron SmartBridge. Is there a simpler solution?

\n", "Title": "Lutron switch - turn off lights when nobody is in the room?", "Tags": "|smart-home|alexa|sensors|samsung-smartthings|", "Answer": "

I finally implemented this using the Samsung SmartThings hub and Samsung motion sensor, using a cloud-to-cloud connection from SmartThings to the Lutron SmartBridge. Did not find a simpler solution.

\n" }, { "Id": "2427", "CreationDate": "2017-12-19T17:32:48.400", "Body": "

I want to get started implementing IoT stuff. I do not have experience with it so far, but am learning fast.
\nI am imagining my setup as follows:

\n\n

Proposed Setup

\n\n

Different wireless sensors (Temperature, Humidity, ...) should be connected to an IoT Gateway via BL (or BLE). The IoT Gateway should push the sensor information to an Open-Source IoT Platform - probably via Wi-Fi & MQTT. The Open-Source IoT Platform should feature a Rule Engine & expose a REST API.

\n\n

Hardware

\n\n\n\n

Questions:

\n\n\n\n

I was thinking about \"thingworx\", as it is kind of the biggest one. \"Kaa\" does not have a rule engine as far as I have read. \"thingsboard.io\" also looks really nice from what I can see.
\nWhat steps do I need to take to implement this? How do I actually do this?

\n\n

While these questions might be rather specific, keep in mind, I have absolutely no idea about this stuff. I don't own anything besides the Raspberry Pi 3B - which I won at a hackathon and haven't used so far.

\n\n

Once I have the information in the IoT Platform and can access it with REST (or can publish it from the platform to an MQTT Broker), I will be able to implement my application. Ideally the setup would allow me to change the IoT Platform with minimal effort. I mean that's what standards (IoT Gateway, Bluetooth, MQTT) are for, right?

\n\n

P.S.: IoT Gateway and IoT Platform tags are missing

\n", "Title": "IoT Setup: Bluetooth Sensor -> IoT Gateway -> IoT Platform", "Tags": "|raspberry-pi|bluetooth|arduino|", "Answer": "

There are a million ways to skin this cat. At this stage, it's best for you to just try to get something working. After that you can worry about \"right\" and \"suitable\" ways.

\n\n

Your setup is feasible and makes sense.

\n\n

Maybe you're running into trouble because you're expecting to find some software off the shelf? Since the DHT22, the Uno, the HC-05 and the Pi are all disparate devices with their own particular requirements, it's highly unlikely you'll find something plug and play. That's okay, it just means you have to write the glue software yourself.

\n\n

Consider each of the interfaces from sensor to cloud and tackle each one in turn. Start by getting the Uno polling the DHT22 for a value. Then get the Bluetooth comms working. Then the Wi-Fi and MQTT. Fire up Thingworx and ingest some MQTT packets. Then put it all together. You'll find lots of examples of each bit, so concentrate on one at a time.
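As a sketch of the glue code for one of those hops (the line format T:22.5,H:48.2 and the field names are invented for illustration), the Pi side might parse a reading the Uno sends over the Bluetooth serial link and repackage it as JSON for MQTT:

```python
import json

def parse_reading(line):
    """Parse a line like 'T:22.5,H:48.2' from the Uno into a dict."""
    fields = dict(part.split(":") for part in line.strip().split(","))
    return {"temperature_c": float(fields["T"]),
            "humidity_pct": float(fields["H"])}

def to_mqtt_payload(reading, device_id="uno-1"):
    # The platform (Thingworx, thingsboard, ...) just needs a stable schema.
    return json.dumps({"device": device_id, **reading})

payload = to_mqtt_payload(parse_reading("T:22.5,H:48.2\n"))
print(payload)
```

Reading the real serial port (e.g. with pyserial) replaces the hard-coded line, and the resulting payload is what you would hand to an MQTT client's publish call.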

\n" }, { "Id": "2454", "CreationDate": "2017-12-27T08:24:23.590", "Body": "

We are working on an embedded device (a Tag; the Tag hardware is an MCU, an external EEPROM, and temperature and humidity sensors) that collects temperature and humidity. The Tag saves the samples in the EEPROM; the connection between the MCU and the EEPROM is I2C.

\n\n

My question is:

\n\n

Is there a way to secure the EEPROM so that only my MCU is able to connect and extract the logged data? \nMy concern is that someone connects a development kit with an I2C driver to the Tag's EEPROM and extracts the data.

\n", "Title": "Secure EEPROM Reading from Another Device", "Tags": "|security|flash-memory|", "Answer": "

You can make things a little bit more complicated for others by disturbing the I2C bus in some way, for example by having your controller lock the bus against other I2C masters. But this does not achieve much, as anyone can easily remove your MCU from the board and free the bus.

\n\n

I recommend removing the markings from the EEPROM's package; this will make it harder, though not impossible, to interface the EEPROM with another MCU. You should also enable read-out protection on your MCU so an attacker cannot use your firmware to reverse-engineer the interface of the EEPROM.

\n\n

But all of the above are just tricks; none of them offer real protection. If you want real protection, you should use encryption.

\n\n
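To make the encryption point concrete, here is a toy encrypt-then-MAC sketch (in Python for readability; on an MCU this would be a C crypto library). The SHA-256 counter-mode keystream is purely illustrative and is not a vetted cipher - in a real product use an established AEAD such as AES-GCM, or a secure-memory part:

```python
import hashlib
import hmac

def keystream(key, nonce, length):
    # Toy stream cipher: SHA-256 in counter mode. Illustrative only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # what actually gets written to the EEPROM

def open_(enc_key, mac_key, blob):
    nonce, ct, tag = blob[:8], blob[8:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("EEPROM contents tampered with or wrong key")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

blob = seal(b"enc-secret", b"mac-secret", b"\x00" * 8, b"T=21.4C H=40%")
print(open_(b"enc-secret", b"mac-secret", blob))  # -> b'T=21.4C H=40%'
```

The MCU would hold the two keys in read-out-protected flash and write only the sealed blob to the external EEPROM, so anyone probing the I2C bus sees nothing useful.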

You did not mention why the data cannot be encrypted, so I assume that your MCU is not able to handle such a task.

\n\n

So you could consider using secure EEPROM, such as Atmel's CryptoMemory family.

\n\n
\n

CryptoMemory is designed to keep contents secure,\n whether operating in a system or removed from the\n board and sitting in the hacker\u2019s lab.

\n
\n\n

Here is an overview and a chip datasheet. It offers four levels:

\n\n
    \n
  1. First Option: No Security
  2. \n
  3. Second Option: Password Protection
  4. \n
  5. Third Option: Authentication
  6. \n
  7. Fourth Option: Data Encryption and MACs
  8. \n
\n" }, { "Id": "2464", "CreationDate": "2017-12-28T15:21:51.453", "Body": "

This question is related to this one. I created the code below:

\n\n
import paho.mqtt.client as mqtt\nimport time\nimport serial\n\n# Open grbl serial port\ns = serial.Serial('COM3',9600)\n\n# Wake up grbl\ns.write(\"\\r\\n\\r\\n\")\ntime.sleep(2)   # Wait for grbl to initialize \ns.flushInput()  # Flush startup text in serial input\n\nf = \"\"\"\nO1000\nT1 M6\n(Linear / Feed - Absolute)\nG0 G90 G40 G21 G17 G94 G80\nG54 X-75 Y-75 S500 M3  (Position 6)\nG43 Z100 H1\nZ5\nG1 Z-20 F100\nX-40                   (Position 1)\nY40 M8             (Position 2)\nX40                    (Position 3)\nY-40                   (Position 4)\nX-75                   (Position 5)\nY-75                   (Position 6)\nG0 Z100\nM30\n\"\"\"\n\n# Runs when the client receives a CONNACK connection acknowledgement.\ndef on_connect(client, userdata, flags, result_code):\n    print \"Successful connection.\"\n    # Subscribe to your topics here.\n    client.subscribe(\"hi\")\n\n# Runs when a message is PUBLISHed from the broker. Any messages you receive\n# will run this callback.\ndef on_message(client, userdata, message):\n    if message.topic == \"hi\":\n        if message.payload == \"run\":\n\n            # Stream g-code to grbl\n            for line in f:\n                l = line.strip() # Strip all EOL characters for consistency\n                print 'Sending: ' + l,\n                s.write(l + '\\n') # Send g-code block to grbl\n                grbl_out = s.readline() # Wait for grbl response with carriage return\n                print ' : ' + grbl_out.strip()\n\n                # Close file and serial \n                s.close()\n            # You could do something here if you wanted to.\n        elif message.payload == \"STOP\":\n            # Received \"STOP\". Do the corresponding thing here.\n            # Close file and serial \n                s.close()\n            print \"CNC is stopped.\"\n\nclient = mqtt.Client()\nclient.on_connect = on_connect\nclient.on_message = on_message\n\nclient.connect(\"iot.eclipse.org\", 1883, 60)\nclient.loop_start()\n
\n\n

But when I run it, I get nothing but Successful connection. and then the code ends. Can you please tell me what's going on?

\n", "Title": "The MQTT Paho Python code doesn't work properly", "Tags": "|mqtt|paho|", "Answer": "

Change the last line from

\n\n
client.loop_start()\n
\n\n

To

\n\n
client.loop_forever()\n
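The difference matters because loop_start() spawns the network loop in a background thread and returns immediately, so a script with nothing left to do just exits, while loop_forever() blocks in the loop itself. A standard-library analogy of the two behaviours (this is an illustration, not paho's actual internals):

```python
import threading
import time

def network_loop(stop_event):
    # Stands in for the MQTT network loop that services the connection.
    while not stop_event.is_set():
        time.sleep(0.01)

stop = threading.Event()

# loop_start() analogue: run the loop in a background thread and return at once.
worker = threading.Thread(target=network_loop, args=(stop,), daemon=True)
worker.start()
print(worker.is_alive())  # -> True, but the main script now falls off the end

# loop_forever() analogue: block the main thread in the loop itself.
stop.set()          # set here only so this demo terminates
network_loop(stop)  # with the event still clear, this call would never return
```

In the question's script, loop_start() returned immediately, the interpreter reached the end of the file and exited, and the background network thread died with it - so no messages were ever processed.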
\n" }, { "Id": "2475", "CreationDate": "2017-12-30T18:00:04.337", "Body": "

There was this recent question in which the OP used iot.eclipse.org as an MQTT broker. This server actually runs the latest released version of Mosquitto broker.

\n\n

I advised checking the connection on the broker side, and I have looked into the possibilities. I could not find a way to access the logs provided by iot.eclipse.org's Mosquitto broker.

\n\n

Anyway, does someone know a way to get information about my client from a public broker?

\n", "Title": "How can I get access to a public MQTT broker's (like iot.eclipse.org) logs?", "Tags": "|mqtt|mosquitto|eclipse-iot|", "Answer": "

Subscribing to the $SYS/# topic will provide some information about the broker and maybe about the clients. A detailed description of these items can be found here.

\n\n
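Which topics such a subscription actually sees follows the normal MQTT wildcard rules: + matches exactly one level, # matches the remainder of the tree, and filters starting with a wildcard deliberately do not match $-prefixed system topics. A small matcher written from those rules (paho ships an equivalent helper, topic_matches_sub) makes this easy to check:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a topic name."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    # Filters beginning with a wildcard never match $-prefixed system topics.
    if topic.startswith("$") and f_parts[0] in ("#", "+"):
        return False
    for i, level in enumerate(f_parts):
        if level == "#":        # multi-level wildcard: matches the remainder
            return True
        if i >= len(t_parts):
            return False
        if level != "+" and level != t_parts[i]:  # '+' matches one level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("$SYS/#", "$SYS/broker/clients/connected"))  # -> True
print(topic_matches("$SYS/broker/log/#", "$SYS/broker/log/N"))   # -> True
print(topic_matches("#", "$SYS/broker/uptime"))                  # -> False
```

So to see the system topics on a broker that exposes them, you must subscribe to $SYS/# explicitly; a plain # subscription will not pick them up.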

There are three main categories to highlight:

\n\n
\n \n
\n\n

The one needed for checking client status falls into the \"Optional Topics\" category.

\n\n\n\n

Also, based on this description about Mosquitto's logging, the console log can be logged into a topic ($SYS/broker/log/#) as well.

\n\n
\n

Two common questions are:

\n \n \n \n

The broker doesn\u2019t let you do this directly but by enabling logging to\n a topic and monitoring the topic with an MQTT client you can get a\n good idea.

\n
\n\n

Probably for privacy and/or security reasons iot.eclipse.org does not have topics for these log entries.

\n\n\n\n

For other public brokers these optional topics may exist; you can check them easily with mqtt-spy, for example.

\n" }, { "Id": "2480", "CreationDate": "2018-01-02T02:55:57.667", "Body": "

I have been working on an MQTT protocol using the SIM5320. I am familiar with the AT command documentation, and have a working implementation with an Arduino.

\n
    \n
  1. I open a network socket with AT+NETOPEN
  2. \n
  3. Then I open a TCP connection with AT+CIPOPEN=0,"TCP","ip address",port.
  4. \n
  5. I then transmit data for the MQTT protocol using AT+CIPSEND, which executes successfully.
  6. \n
\n

If I send data to the SIM module through MQTT, it is also received and the message is detected.

\n

With MQTT, there is a Keep-Alive interval which specifies how long the server will keep a connection open between communications - basically, how long the client can idle before being forcibly disconnected from the server. However, I have set this value to the maximum of 18 hours, which is far longer than the ~15 minute disconnections.

\n

My issue arises after ~15 minutes, when I try sending a command to the server and no response is given. The SIM has not issued a "+IPCLOSE: 0,4" (which usually occurs when the server forcibly disconnects the client) or any other sort of indicator.

\n

Additionally, I am still able to send data and it appears that the CIP connection is still open, as indicated by AT+CIPOPEN?. When I try to close the connection with AT+CIPCLOSE=0, I receive +CIPCLOSE: 0,4 and "ERROR". There is no mention of what +CIPCLOSE: 0,4 means in the documentation; however, it does not seem to close the connection properly, as the connection can then neither be reopened nor used.

\n

I would really love to know what is happening during these ~15 minutes between establishing a connection and sending data, and then attempting to send data again. There is no alert or any indication of anything going wrong, so I am seriously confused.

\n

I initially asked this question on Electrical Engineering stack exchange, but was advised to ask it here as well.

\n

I've attached the code I wrote here for anyone who would like to take a look, and there aren't any libraries you need to run it.

\n", "Title": "SIM5320 MQTT TCP connection closing unexpectedly after time", "Tags": "|mqtt|networking|", "Answer": "

Try enabling TCP keep-alive with a period shorter than 15 minutes. You should be able to enable TCP keep-alive from your server and/or from your device (SIM5320); enabling either one should solve your problem.

\n

In the SIM5320 you can use AT+CTCPKA to enable TCP keep alive.

\n

In your server, enable socket.SO_KEEPALIVE

\n
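On the server side this is a per-socket option. A sketch (the TCP_KEEP* fine-tuning constants are Linux-specific, hence the hasattr guard; the point is to keep the idle timer well under the ~15-minute timeout):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-only tuning: first probe after 60 s idle, probe every 30 s,
# give up after 5 unanswered probes.
for name, value in (("TCP_KEEPIDLE", 60), ("TCP_KEEPINTVL", 30), ("TCP_KEEPCNT", 5)):
    if hasattr(socket, name):  # these constants are not defined on every platform
        sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, name), value)

ka_enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
print(ka_enabled)  # -> True
sock.close()
```

The probes are generated by the kernel's TCP stack, which is why the application on either end never sees them.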

This is transparent to the TCP user (you), so it should not interfere with your application, as opposed to the approach in the accepted answer.

\n" }, { "Id": "2484", "CreationDate": "2018-01-02T20:38:20.090", "Body": "

I'm working on an IoT project and my role is to provide APIs to allow devices to communicate with the server/database.\nI have decided to use MySQL for relational data like user details etc., MongoDB for other data like device data (multi-tenant data), and the PHP Laravel framework for API development.

\n\n

I just want to know if this is a good combination of language, framework and databases. If not, what would be a good combination for my requirements?

\n\n

As of now the number of devices is limited, but in future it may grow into the thousands, and they can send data frequently (once a minute or once every 5 seconds). Most of the devices will be running on low power and memory, e.g., temperature, humidity, and heartbeat sensors. At the same time, the APIs will be used for a web portal as well. At the end of the day, the REST APIs will be used by devices, the web portal, and the mobile app.

\n\n
    \n
  1. Is it good to use MongoDB+MySQL over other databases like PostgreSQL? Or do you have any other suggestions?
  2. \n
  3. Is it good to use two different platforms for API development, one for devices (MQTT+Mosquitto) and another for the web portal and mobile app (Laravel)? Or is there any suggestion to make this system better?
  4. \n
  5. What open-source programming language would you suggest?
  6. \n
\n", "Title": "Recommendation to choose API development open source Language, Framework and Database", "Tags": "|mqtt|web-services|open-source|rest-api|", "Answer": "

If you want to use a REST-like environment, you can set up PHP CodeIgniter + MariaDB.\nBut you need to reconsider your requirements: using REST over HTTP may require intermediate techniques and a web-service-based implementation.\nIf you are planning a real-time application, it would be better to use a telemetry messaging protocol such as MQTT.

\n\n

In our IoT Lab we're implementing this configuration for a real-time solution:

\n\n\n\n

The IoT devices are \"Adafruit Feather Huzzah\" ESP8266 boards, using the Arduino PubSubClient.h library.

\n\n

The Mosquitto MQTT broker does not (for now) have any security configuration, and the MongoDB and MQTT ports are left at their defaults.

\n\n

It should work out of the box after setting up the data collections and documents in \"application.conf\" from Izmailoff's GitHub.

\n" }, { "Id": "2503", "CreationDate": "2018-01-06T11:44:50.173", "Body": "

I designed an authentication protocol for IoT. Now I want to emulate my protocol with the Cooja simulator, in order to test power consumption.

\n\n

My scheme is gateway-based, so I used the RPL-Border-Router sample as the gateway and some motes as sensors. I used the Tunslip tool to connect my network to the internet.

\n\n

Now I want to implement my client so that it communicates with my RPL network in order to perform key agreement.

\n\n

From my searching, I found that the client should be the Firefox plugin Copper. (Is there any other way to implement the client? If yes, what is it?)

\n\n

As I said before, in my own protocol there is a key agreement phase between the client and the Cooja motes, but I have a problem handling this phase in Copper. It seems that the requests and responses from the client (Copper) are sent as packets (with GET, POST, ...) to the Cooja motes. So how do I implement packets in Copper?

\n", "Title": "How to create external client for cooja?", "Tags": "|contiki|", "Answer": "

Copper is actually a tool for sending CoAP requests; it is a simple implementation of a REST-style service. If you want to work with another protocol, you should look for another tool.

\n" }, { "Id": "2517", "CreationDate": "2018-01-08T10:37:20.100", "Body": "

With the increased security risks against IoT based smart homes, many security appliances have been commercialized. These appliances or boxes claim to protect the home network from malware, cyber-attacks, and preserve consumer\u2019s data privacy.

\n

There is a growing list of entrants in this space, including F-Secure (SENSE), BitDefender BOX, and many others.

\n

I would like to know how these boxes work, technically. Is there an open-source one among them?

\n

Do they simply work like traditional IDS/IPS/Firewall? I am sure there are many differences and the cloud support is one of them. Are there other differences?

\n", "Title": "How do Smart Home Security Boxes work?", "Tags": "|smart-home|security|", "Answer": "

The boxes are Intrusion Prevention Systems (IPSs) that work by monitoring for \"Indicators of Compromise\" (IoC). This would be network traffic that is unexpected in your environment; network traffic that goes to a known bad destination; or network traffic that contains packets consistent with malware.

\n\n

Typically, these boxes come with a subscription. The company selling them sends out frequent updates (daily or more often) that refresh the IoC database. If they discover that some ransomware reaches out to https://ransom.keyserver.evil.example.com, they might immediately add the network address to the IPS blacklist, and publish it to their customers as soon as they can. If you have a device on your network that tries to connect to get a ransomware key, their IPS will break the connection so you don't get infected.

\n\n

Some of these boxes also come with software that maintains an inventory of your devices. You can take a look at all of the little IoT things on your network today, and bless them all. Tomorrow, if it detects there's a new node on your network, it can pop up a warning on your mobile phone that says \"New thing detected on your network, authorize (yes/no)?\" This might help you block someone borrowing your wifi, or hacking into your network.

\n\n

There isn't a direct open source replacement for all of these functions; not because the technology is so special, but because the constant updating of the IoC database requires intel constantly gathered by humans responding to new incidents, and paying a bunch of humans is expensive. You can achieve some of this functionality with an open source IPS system like Snort, but the Snort \"community\" subscription is updated 30 days after their commercial subscription. That's quite slow when today's common threats include 0-day based malware.

\n" }, { "Id": "2528", "CreationDate": "2018-01-10T08:12:30.603", "Body": "

I am planning to start to implement the below IoT use case.

\n\n

Use case

\n\n

The IoT devices will send 100k messages/minute to the gateway via repeaters, and the gateway will transfer the messages to the cloud. I want to track the employees in an organization. The sensors will be fixed to their ID cards. A sensor sends location-related data (approx. 15 KB/message) to the gateway via repeaters. This is for analytical purposes: after the data is passed to the cloud, I'll do some analytics, store the results in the DB, and display them on a web page. Based on this analytics data, I'll show each user's current location and also their movements over a given time span (the last 1 hour, 2 hours, or 1 day).

\n\n

I'll do some processing over the data and send it to the front end/DB.

\n\n

I have gone through the IoT basics and its architecture. Then I decided to use \"SMACK\" stack (Spark, Mesos, Akka, Cassandra, Kafka) architecture.

\n\n

I decided to use the \"Kafka native client\" in the gateway to publish the messages to the cloud.

\n\n

Should I use MQTT protocol to transfer the message to Kafka? Or MQTT is not needed for the above use case?

\n\n

If yes, what would be the benefit of using MQTT with the \"SMACK\" architecture?

\n", "Title": "Should I use the MQTT protocol?", "Tags": "|mqtt|protocols|data-transfer|", "Answer": "

Almost any MQTT broker will have no problem handling this load, especially for QoS 0 messages (probably your case). We have a constant load on our broker of 100,000 incoming messages (0.5 KB each) per second (+SSL). A problem may appear on the traffic (bandwidth) side, not from the packets per second.

\n\n

Regarding the architecture of your system, my personal advice is to make it as simple as possible. And simple means just a few intermediate components/services. If you can connect two services directly, do it. You will always have the possibility to make it more complex when you start adding features.

\n" }, { "Id": "2529", "CreationDate": "2018-01-10T10:34:40.210", "Body": "

I have a GPS unit with an IMU based on the Intel Edison that I want to use alongside a Raspberry Pi for a robotics project. The general idea is that I want to use the Edison unit to provide the Pi with sensor data (GPS, gyro, compass, ...) in a standardised format, and use the Pi to drive the robot itself. This will allow me to add further Edisons in the future, and/or replace them with newer and improved sensors without having to modify the computer driving the robot.

\n\n

However, I'm stuck on how to integrate the two. My initial idea was to use the onboard USB ports for communications, but I don't quite know where to start. \nReconfiguring the USB port to provide Ethernet over USB is an alternative if that simplifies things.

\n\n

Having the Edison write continuously to the usb port with no other communication between the two is no problem, but there are scenarios where the Pi should send commands to the Edison in order to reset or calibrate one or more sensors or disable them during testing.

\n\n

Is the best option to write a custom driver for this, or would I be better off using Ethernet over USB and simply implementing a server/client model over TCP?

\n\n

If the USB driver option is the best, where would be a good place to look? This driver would essentially have to run one or more programs/commands on the Edison returning the output to the Pi.

\n\n

Edit: As mentioned USB is the preferred method for connection, as it allows for both data and power to the Edison (driving it from the Pi) so I can avoid having to add a separate power source for the Edison. The messages will ideally be simple json strings going at a rate of approx 100/sec. The Edison is running Busybox and the Pi is on a Debian-based distro. Neither will have access to an external network while running, so they will be limited to USB or Ethernet over USB as there are no other physical connectors found on both units.

\n", "Title": "Connecting Raspberry Pi and Intel Edison", "Tags": "|raspberry-pi|intel-edison|usb|", "Answer": "

Configuring the USB port as Ethernet only complicates things. Not only does it throw the uncommon, proprietary RNDIS protocol into the mix, but then you need to add an IP server/client application on top.

\n\n

The Micro USB connector on the Edison is already configured to provide a bridge to a UART using an FTDI chip. The Raspberry Pi already comes with (equally proprietary, but hugely common) FTDI drivers. The drivers will create a /dev/ttyUSB device on the Pi when the Edison is plugged in that you can talk to like any other serial port.

\n\n

Create a server for the serial port on the Edison that responds to simple commands (or simply periodically sends data) and create a client for the virtual serial port on the Pi that handles the other end.
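That command protocol can be as simple as newline-delimited JSON, which keeps framing trivial over a byte stream. A sketch of the framing layer (the command names are invented; in use, the file-like object would be the /dev/ttyUSB serial port, e.g. opened with pyserial, rather than the in-memory buffer used here):

```python
import io
import json

def send(port, obj):
    # One JSON document per line: the newline delimits each message.
    port.write(json.dumps(obj) + "\n")

def receive(port):
    return json.loads(port.readline())

# Simulate the serial link with an in-memory buffer for demonstration.
link = io.StringIO()
send(link, {"cmd": "calibrate", "sensor": "gyro"})  # Pi -> Edison
link.seek(0)
request = receive(link)
print(request["cmd"])  # -> calibrate
```

The Edison side would read lines in a loop, dispatch on the cmd field, and write a JSON response (or its periodic sensor readings) back the same way.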

\n" }, { "Id": "2537", "CreationDate": "2018-01-11T14:19:04.190", "Body": "

I have several Raspberry Pi Zeros connected to agricultural sensors on a large farm (1500+ acres). What is my best solution for getting the agricultural data from them wirelessly? I want to get the data to a central internet-connected server which will be approximately 5-8 km away from the sensors.

\n", "Title": "Raspberry Pi outdoor connectivity for large farm", "Tags": "|networking|raspberry-pi|agriculture|", "Answer": "

Wow! 5 to 8 km is quite a distance.

\n\n

If you are willing to spend on the order of US $500 per sensor, then I can recommend a solution using an inexpensive satellite modem. Although this might sound expensive, think of the cost spread over many years, and be aware that in addition to the sensors, you can program the satellite modems themselves, if need be.

\n\n

I have had lots of fun with Skywave\u2019s offerings (they have now rebranded as OrbComm), and you might like to look at the IDP_800.

\n\n

Read the datasheet.

\n\n

Mass (with batteries): 1.3 kg. Dimensions (with integrated antenna): 43.2 cm x 14.7 cm x 2.5 cm. It runs on 6 AA batteries, which can be bought almost anywhere.

\n\n

I send only one 50-byte message a day & they tell me that I can expect a three-year battery life. Their units generally cost US $500 - $1,000 (with discounts for bulk purchases), and their airtime rates are competitive.

\n\n

The device has built-in GPS and is fully programmable in the Lua scripting language.

\n\n

Obligatory picture follows:\n\"enter

\n\n
\n\n

If you are looking for something cheaper, then I would recommend Flutter (see also the completed Kickstarter page for more info).

\n\n

It's a US $20 Arduino board with a 1 km wireless range, so you would still need to use repeaters, but you would certainly need fewer than with conventional WiFi. I would suggest a mesh network, which Flutter supports, and would recommend that you mount them on posts, as high above the ground as you can.

\n\n

Whatever solution you go for, don't accept manufacturers' claims as to range - start with one Flutter/whatever, and keep adding more, checking as you go how good communication is. DO NOT put them all in place first & then go for a big-bang turn-on.

\n\n

That's about it really, so now for the obligatory graphic

\n\n

\"enter

\n" }, { "Id": "2542", "CreationDate": "2018-01-12T19:14:34.540", "Body": "

I'm looking at getting a very quick prototype together for a piece of software I'm demoing, and I'd like to be able to say a phrase to the Google Assistant and have it read back a phrase that's been set on the fly from the software.

\n\n

I'm thinking of something along the lines of:

\n\n
    \n
  1. I enter a phrase into my software and click some save button.
  2. \n
  3. My software saves this phrase in a document on Google Drive.
  4. \n
  5. I say a certain phrase to the assistant.
  6. \n
  7. The assistant reads back phrase from the Google Drive document.
  8. \n
\n\n

Just to clarify, I'm only looking for help with point 4. The rest should be straightforward.

\n\n

Is this possible at all? I don't mind a hacky solution as it's just for a short proof-of-concept demo.

\n", "Title": "Google Assistant read custom phrase", "Tags": "|google-home|google-assistant|", "Answer": "

The easiest way to do this kind of experiment is to use API.ai (a tool acquired by Google just before Google Home was born [1]).

\n\n

In API.ai you can very easily imitate the flow where data is set with one command and read back with another, but it is also possible to build the exact flow you describe with actual Google Drive.

\n\n

Your described flow is done like this:

\n\n
    \n
  1. Use an Intent to wake up API.ai and an Action to do something with the data in the Response [1]. In the Action you will create a custom Fulfillment with, for example, Node.js [2] and there..
  2. \n
  3. ..get use of Node.js version of the REST api of Google Drive to handle the document.
  4. \n
  5. Use another Intent in API.ai to generate another Action and call another Fulfillment..
  6. \n
  7. ..in Node.js to communicate in the other direction, fetch the data from the Google Drive REST API, and read out the data in the Response to the Intent in API.ai.
  8. \n
\n\n

[1] https://www.smashingmagazine.com/2017/05/build-action-google-home-api-ai/#google-actions-and-api-ai

\n\n

[2] https://medium.com/google-cloud/how-to-create-a-custom-private-google-home-action-260e2c512fc

\n\n

[3] https://developers.google.com/drive/v3/web/quickstart/nodejs

\n" }, { "Id": "2553", "CreationDate": "2018-01-03T22:47:52.980", "Body": "

I am using two AWS IoT buttons to increment a scoreboard. The system works, but there is about a 5 second delay from the button being pressed to until the message from the button is actually published to AWS, which makes the scoreboard less responsive than I would like.

\n\n

I'm having trouble finding information about this delay between the initial button press and the message being published. I think I remember seeing in the documentation or on a blog that the delay exists to prevent an accidental double tap from being recorded, but I haven't been able to find where I read that.

\n\n

I have two questions:

\n\n
    \n
  1. Is there any documentation or explanation of this delay that I'm missing?
  2. \n
  3. Is it possible to change this delay? Or is this built in to IoT buttons?
  4. \n
\n", "Title": "How to decrease AWS IoT button press delay before message publishing?", "Tags": "|aws-iot|amazon-iot-button|", "Answer": "

More realistically, this delay encompasses the time to register on the wifi network.

\n\n

In order to minimize power consumption (that's an officially irreplaceable battery) the device is normally completely dormant - it cannot afford the energy cost of maintaining a wifi network connection, and instead only starts trying to obtain one after the button has been pushed and it has traffic to send.

\n\n

Comparatively speaking, five seconds to wake up, authenticate and transmit a message is fairly reasonable.

\n\n

If you want something faster, you'll probably have to look at a different technology for the first \"hop\" from battery to mains-powered infrastructure - perhaps proprietary 2.4 GHz RF where you can simplify the association process. Or provide a power source which can accommodate a system that maintains a connection even when not being actively used.

\n" }, { "Id": "2557", "CreationDate": "2018-01-15T19:06:16.337", "Body": "

I am trying to connect an LDR sensor to the IoT by using a GPIO pin (pin 4) of a Raspberry Pi and publish MQTT messages. I am using a laser as a transmitter and the LDR sensor as a receiver. The output signal from the LDR sensor is 0 or 1; if something passes through the laser line, the output of the LDR sensor will be 1, and then the code must publish an MQTT message.

\n

I tried this code:

\n\n
import paho.mqtt.client as mqtt\nimport RPi.GPIO as GPIO\nimport time\nimport ssl\n\nGPIO.setmode(GPIO.BCM)\nGPIO.setup(4, GPIO.IN)\n\n# Define Variables\nMQTT_PORT = 1883\nMQTT_KEEPALIVE_INTERVAL = 60\nMQTT_TOPIC = "ldr"\nMQTT_MSG = "there is a product"\nMQTT_HOST = "iot.eclipse.org"\n\n\n# Define on_publish event function\ndef on_publish(client, userdata, mid):\n    print ("Message Published...")\n\n# Initiate MQTT Client\nmqttc = mqtt.Client()\n\n# Register publish callback function\nmqttc.on_publish = on_publish\n\n\n# Connect with MQTT Broker\nmqttc.connect(MQTT_HOST, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL)\nprint ("Connected Successfully")\nmqttc.loop_forever()\n\n# Main Loop\n\nwhile True:\n    input_value = GPIO.input(4)\n    if input_value == 1:\n        mqttc.publish(MQTT_TOPIC,MQTT_MSG,qos=1)\n        print ("message published")\n        time.sleep(1)\n
\n

When I run it I only see the printed output text "Connected Successfully" and I am not seeing any of the expected MQTT messages.

\n

Can you please tell me what's wrong with this code? I created the main loop based on this idea: input_value = GPIO.input(4) should read the signal; if the value of the signal is 1 then it should publish the message. What's wrong with my code?

\n", "Title": "Program with MQTT hanging after starting, LDR processing is not working with no MQTT messages", "Tags": "|mqtt|raspberry-pi|", "Answer": "

I will post an answer, just to make it clear what your problem is.

\n\n

However, this is not really my answer - you ought to have been able to see it when @hardillb said \"Hint: your Main loop is still never getting executed\".

\n\n

Since it is never getting executed, I looked at the line above it, which says mqttc.loop_forever(). Then I Googled for that function, and the 2nd or 3rd result was Understanding The Loop - Using The Python MQTT Client, which clearly says

\n\n
\n

The loop_forever() method blocks the program, and is useful when the program must run indefinitely.

\n
\n\n

So, obviously the following statement will never be reached. There is your answer.

\n\n
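To make the blocking behaviour concrete, here is a minimal stand-in sketch using plain Python threading (no broker or paho needed; `network_loop` is a hypothetical placeholder for the client's network loop). A blocking call like loop_forever() never returns, whereas running the loop in a background thread, which is roughly what paho's loop_start() does, lets the code after it execute:

```python
import threading
import time

def network_loop(stop):
    # Stand-in for mqttc.loop_forever(): runs until told to stop.
    while not stop.is_set():
        time.sleep(0.01)

stop = threading.Event()

# Blocking style: calling network_loop(stop) right here would never return,
# so any "main loop" written below it would never execute.

# Background style (conceptually what paho's loop_start() provides):
t = threading.Thread(target=network_loop, args=(stop,), daemon=True)
t.start()

readings = []
for _ in range(3):       # the "main loop" now actually runs
    readings.append(1)   # stand-in for GPIO.input(4)
    time.sleep(0.01)

stop.set()
t.join()
print(len(readings))     # -> 3
```

The same pattern applies to the real client: run the network loop in the background (or service it inside your own loop) so the GPIO-reading loop is actually reached.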
\n\n

I am not going to fix your program for you, but I will give you some hints (teach a man to fish).

\n\n\n\n

I hope that this has been of some help, and look forward to seeing you at the various SE sites :-)

\n" }, { "Id": "2560", "CreationDate": "2018-01-17T13:15:24.007", "Body": "

I am working with a RF-LORA-SO 868Mhz module.

\n\n

I read this article that says the range of this module could go up to 50km. But I would need to connect an antenna to my LoRa module. This article I mentioned above doesn't seem to talk much about wiring.

\n\n

Most of these antennas have SMA connectors.

\n\n

How do I connect an antenna like this to my radio module?

\n\n

\"antenna\"

\n", "Title": "Increasing the range of an RF module", "Tags": "|lora|lorawan|", "Answer": "

The data sheet is not too forthcoming on their expectation for connecting an antenna (other than suggesting that a matching network is mandatory if you run high power). However, from the photos in the linked article, it's clear that what was used in this example was a 50 ohm coax to SMA 'pigtail'. You can get a short coax pre-connected to an SMA (with maybe a 2nd connector on the other end which you can cut). Keep the bare wires at the end of the coax where it solders on to the module as short as possible and take care not to melt the insulation (heatsink with pliers).

\n\n

You shouldn't expect optimum performance like this, unless you're lucky (but it may be close).

\n\n

The quoted 50km range seems to be in free-space, to an aircraft. On the ground, you will experience attenuation and reflection (trees, buildings, etc) and these can have a huge impact on range. This goes some way to explaining the difference between the module's data-sheet performance, and the best-case that can be achieved in perfect conditions.

\n\n

You can also sometimes get increased range by using a more directional antenna. This rapidly becomes much more specialist though. I don't know what the LoRa protocol assumes, but radio protocols typically make some assumptions about time-of-flight in a duplex system, and this can also hard-limit the achievable range.

\n" }, { "Id": "2568", "CreationDate": "2018-01-19T16:36:11.493", "Body": "

I'm trying to learn IoT development using an Arduino and Amazon's menu of services\u2014Alexa Skills Kit, AWS Lambda, and AWS IoT. I've been able to come a long way, but when I think about implementing these for a fleet of devices, I can't figure out how to approach this problem:

\n\n

For a headless device, how do you link a customer's device with that customer?

\n\n

You can readily get a userID from Alexa whenever a user invokes your Alexa skill, and you can match that in your database to a customer, and potentially match that with a device registered to that customer\u2014but how do you register a device to a customer? Would it have to be like having the customer enter a serial number in a UI somewhere? I had a thought that you could potentially use OAUTH to get a token from, e.g., a customer's Amazon account, send that to the device, and then have the device present both the token and its own identifier to your database. That way you have at least a link between their linked account and the device.

\n\n

Does this sound like a reasonable approach? I haven't been able to find much about connecting particular devices to particular customer accounts, so any links with more information are very welcome.

\n", "Title": "How to link device with user?", "Tags": "|alexa|aws-iot|", "Answer": "

In addition to the two most common methods:

\n\n
    \n
  1. Customer enters serial number printed on device into company portal.
  2. \n
  3. Device exposes WiFi AP for initial registration.
  4. \n
\n\n

is a third method that's not uncommon:

\n\n
    \n
  1. Device forms proximity connection in response to physical trigger.
  2. \n
\n\n

The trigger could be bringing a magnet nearby, tapping the device, shining an IR LED into a window, or removing a single-use tab. Whatever the trigger is, it will cause the device to go into a commissioning or registration mode, which makes it responsive to some form of short-range communication, usually Bluetooth but possibly NFC or WiFi. The device is paired to the customer's smartphone or computer via this temporary communications channel, automatically informing it of its unique identity so the customer can complete the process of registering the device.
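The commissioning flow described above can be sketched as a small state machine (the state and method names here are hypothetical, purely illustrative):

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    COMMISSIONING = auto()
    REGISTERED = auto()

class Device:
    def __init__(self, device_id):
        self.device_id = device_id
        self.mode = Mode.NORMAL

    def physical_trigger(self):
        # Magnet, tap, IR pulse, pull-tab: enables short-range comms.
        self.mode = Mode.COMMISSIONING

    def pair(self):
        # Over the temporary channel (Bluetooth/NFC/WiFi) the device hands
        # its unique identity to the customer's phone, which registers it.
        if self.mode is Mode.COMMISSIONING:
            self.mode = Mode.REGISTERED
            return self.device_id
        return None

d = Device("SN-0042")
assert d.pair() is None    # pairing is ignored until the physical trigger
d.physical_trigger()
print(d.pair())            # -> SN-0042
```

The key design point is that pairing is only possible in the short window after a deliberate physical action, which prevents a neighbour (or attacker) from silently registering the device.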

\n" }, { "Id": "2569", "CreationDate": "2018-01-19T17:31:29.147", "Body": "

I read this article which said I could get up to 50 km with a LoRa module.

\n\n

But when I read the product description it says the in-built range is only 16 km, so I obviously need an antenna. But what kind of antenna can I use that can get my 16 km LoRa module to 50 km?

\n\n

Will something like this work?

\n", "Title": "Is a range of 50 km+ possible in LoRa?", "Tags": "|raspberry-pi|lora|lorawan|", "Answer": "

It seems that a range of at least 440 km is possible with the LoRa protocol (i.e. there is no time-of-flight assumption as in GSM).

\n\n

The correct way to answer this question is by looking at the link budget for your transmit/receive arrangement. Although the basic calculations are simple, knowing the right way to do the calculation is not so simple.

\n\n

To receive a usable signal, the receiver needs a certain signal-to-noise ratio (determined by the tolerable error rate and the modulation characteristics). You may find some online examples of how to calculate this (for LoRa or something similar).

\n\n

Signal comes from transmit power, plus antenna gain at the transmitter, minus free-space loss (the range calculation), minus any shading from non-line-of-sight, plus antenna gain at the receiver.

\n\n

Noise comes from the receive environment or thermal noise (whichever is greatest) and amplifier noise figure, plus any multi-path interference which is not delay compensated in the receiver.

\n\n

Assuming a 16km range is possible with a simple antenna (spherical uniform radiation), you're asking for a 3.125 times range increase or a 9.77 times increase in power. This is conveniently about 10 dB, so as a rough approximation you need a 5 dB antenna gain above the 'trivial' antenna at each end. If you aim for 7 dB at each end, this gives you a small margin for other factors you've not accounted for, imperfections in your assembly, etc.

\n\n
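The arithmetic above can be checked directly. Under the free-space assumption, received power falls with the square of range, so the extra gain needed in dB is:

```python
import math

old_range_km = 16.0
new_range_km = 50.0

range_ratio = new_range_km / old_range_km     # 3.125x range increase
power_ratio = range_ratio ** 2                # free-space loss ~ range^2, so ~9.77x power
total_gain_db = 10 * math.log10(power_ratio)  # ~9.9 dB, i.e. roughly 10 dB
per_end_db = total_gain_db / 2                # ~5 dB of antenna gain at each end

print(round(total_gain_db, 2), round(per_end_db, 2))
```

Note this only covers the free-space term; shading, assembly losses, and the horizon issue mentioned below still have to come out of whatever margin you budget on top.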

A further complication is that the quoted 16km range is plausibly within the horizon of an antenna close to the earth, but to achieve 50km line of sight, you would need to raise one or both ends by many metres.

\n" }, { "Id": "2574", "CreationDate": "2018-01-20T18:35:43.667", "Body": "

I want to create a web site to monitor my IoT devices, so I went through many tutorials to achieve that.

\n\n

The most common approach used in the tutorials is writing to DynamoDB from IoT, then using Lambda to fetch the data from DynamoDB, and finally hosting the site on S3. But S3 hosts static websites, while I need a dynamic website in order to receive the data from AWS IoT.

\n\n

Can you please help me with this or show me a tutorial that does the same thing?

\n", "Title": "Using AWS Lambda function to create a monitoring website for IoT devices", "Tags": "|aws-iot|aws|", "Answer": "

Lambda is for running tiny functions, not long-running processes.

\n\n

You should have your web page connect directly to AWS IoT using WebSockets. Then it can get messages directly when they happen and display them, etc.

\n\n

If you don't need to store your state, you don't need Dynamo or S3. (Although you may want to use S3 to host the JavaScript/HTML for your application.)

\n" }, { "Id": "2577", "CreationDate": "2018-01-21T09:18:35.567", "Body": "

This question is related to this one. After you helped me to fix my mistake, I connected to the Eclipse broker and it worked just fine, both connection and publishing. Then I switched to the AWS IoT broker with this code:

\n
#!/user/bin/python3\nimport paho.mqtt.client as mqtt\nimport RPi.GPIO as GPIO\nimport time\nimport ssl\nimport _thread\n\nGPIO.setmode(GPIO.BCM)\nGPIO.setup(4, GPIO.IN)\n\n# Define Variables\nMQTT_PORT = 8883\nMQTT_KEEPALIVE_INTERVAL = 45\nMQTT_TOPIC = "ldr"\nMQTT_MSG = "there is a product"\n#MQTT_HOST = "iot.eclipse.org"\n\nMQTT_HOST = "xxxxxxx"\nTHING_NAME = "LDRsensor"\nCLIENT_ID ="LDRsensor"\nCA_ROOT_CERT_FILE = "xxxxxxx"\nTHING_CERT_FILE = "xxxxxxxxxxxx"\nTHING_PRIVATE_KEY = "xxxxxxxxxxx"\n\n\n# Define on_publish event function\ndef on_publish(client, userdata, mid):\n    print ("Message Published...")\n\n# Initiate MQTT Client\nmqttc = mqtt.Client()\n\n# Register publish callback function\nmqttc.on_publish = on_publish\n\n\n# Configure TLS Set\nmqttc.tls_set(CA_ROOT_CERT_FILE, certfile=THING_CERT_FILE, keyfile=THING_PRIVATE_KEY, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1_2, ciphers=None)\n\n\n# Connect with MQTT Broker\nmqttc.connect(MQTT_HOST, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL)\nprint ("Connected Successfully")\n#mqttc.loop_forever()\n\n\ndef publishMessage(Variable):\n\n    while (1):\n        input_value = GPIO.input(4)\n        if input_value == 1:\n            mqttc.publish(MQTT_TOPIC,MQTT_MSG,qos=1)\n            #print ("message published")\n            time.sleep(1)\n_thread.start_new_thread(publishMessage,("publishMessage",))\nmqttc.loop_forever()\n
\n

But what I get is that I can connect successfully, yet I cannot publish the messages. Is it because of the forever loop, or is there something else? I tried to debug it: on the line _thread.start_new_thread(publishMessage,("publishMessage",)) I got [Errno 32] Broken pipe. What does that mean, and how can I fix it?

\n

Should I install AWSIoTPythonSDK?

\n", "Title": "My thing connected to AWS IoT but it does not publish messages", "Tags": "|aws-iot|aws|", "Answer": "

I finally figured out what my mistake was. It was in the ARN resource of the policy: I wrote the wrong topic at the end of the policy's resource line, ldr instead of LDRsensor.

\n" }, { "Id": "2584", "CreationDate": "2018-01-24T08:00:55.683", "Body": "

Is there any way for me to upload code to a NodeMCU without using micro USB?\nI tried to connect RX/TX (NodeMCU) to TX/RX (Arduino) but it failed.

\n", "Title": "Can I upload code to NodeMCU without using micro USB?", "Tags": "|esp8266|", "Answer": "

You can use Over The Air updates (OTA). If you google esp8266 OTA you'll find several examples, such as this one.

\n" }, { "Id": "2586", "CreationDate": "2018-01-24T20:20:06.310", "Body": "

I'm studying ways to make an IoT device access a user's Wi-Fi network.\nI know about the WPS way, where the device 'broadcasts' a signal and the router, after being commanded to listen, 'receives' it and gives the device access.\nThere's also the way in which the device creates its own access point, and the user connects to it to pass the SSID and password of the home network.\nAre there other ways to accomplish that?

\n\n

I've read this article that talks about this 'ProbMe' method:

\n\n

I may be wrong, but the Broadlink Rm Pro may do something like this. Recently I configured one of those and I just had to:

\n\n
    \n
  1. Connect my smartphone to my wifi

  2. \n
  3. Scan an QR code or type a code in Broadlink App

  4. \n
  5. Input my network SSID and Password in Broadlink App

  6. \n
  7. And, I do not know how, the device is connected to my network; the app even has its MAC address.

  8. \n
\n\n

Do any of you know more about this ProbMe method and/or other alternatives to give an IoT device access to a user's Wi-Fi network?

\n\n

Edit: Searching for ProbMe, I've found out that this is a technology from a company called Econais; just sharing in case any of you want to develop/produce something similar. The company works with IoT software/hardware.\nDisclaimer: I do not work at Econais, nor am I involved with it in any way.

\n", "Title": "How can a device get the SSID and Password of my Network without WPS?", "Tags": "|networking|wifi|communication|protocols|", "Answer": "

Most devices I bought (IP cameras, light switches, power sockets) were using ultrasonic communication behind the scenes. Have a look at brands such as Chromecast, Lisnr & Chirp.

\n\n

When the device is in configure mode, you have to hold your smartphone close to the device and the client app will send out an audio signal (could be audible or inaudible), with the SSID and password modulated in. Sounds like stone age, but it works with no frills :)

\n" }, { "Id": "2589", "CreationDate": "2018-01-25T02:58:46.057", "Body": "

I have been chasing a problem with seemingly-random and infrequent disconnects due to \"Socket error on client\" between my ESP8266 client (PubSubClient 2.6.0) and my Mosquitto broker on Raspberry Pi (mosquitto 1.4.10).

\n\n

I have been reviewing the various log files and debug statements to try and figure out what is going on. While following the log file (tail -f /var/log/mosquitto/mosquitto.log), I noticed that PINGREQ and PINGRESP record pairs are not populated at the same time, even though their timestamps are identical. PINGRESP does not show up until the next PINGREQ is received. Is this normal behaviour?

\n\n

An example is provided in the screenshot below. The orange row showed up first. The blue rows were added together in the log file 15 seconds later, even though they do not have the same timestamp. In fact, the PINGRESP timestamp is identical (1516832931) to the PINGREQ above (in orange).

\n\n

I would expect PINGRESP to follow almost immediately behind PINGREQ (i.e. the timestamps make sense). I just want to make sure that is actually happening, given the several seconds of lag I observe in tail. I think this is important because I assume the disconnects are from a keepalive violation, with my keepalive timeout at 15s.

\n\n

\"enter

\n", "Title": "How long between PINGREQ and PINGRESP on Mosquitto broker?", "Tags": "|mqtt|mosquitto|", "Answer": "

It seems the \"lag\" observation is just a side-effect of the logging process, and not a real delay in the data flow.

\n\n

Following the suggestion of @hardillb, I installed tshark on the Raspberry Pi. By observing the request and response packets between the Pi (192.168.0.104) and the ESP8266 (192.168.0.117), I can see that they are within fractions of a second (i.e. there is no 15 second delay responding from the MQTT broker).

\n\n

\"tshark

\n\n

steps taken:

\n\n
sudo apt-get install tshark\ncd /home/pi\nsudo nano espcapture.pcap\nsudo chown root:root espcapture.pcap   # because tshark runs as root\nsudo tshark -i wlan0 -w /home/pi/espcapture.pcap\n# wait 2 min, then press CTRL+C\n
\n\n

I then transferred the resulting espcapture.pcap file to my PC, and opened it in Wireshark for analysis / filtering.

\n" }, { "Id": "2596", "CreationDate": "2018-01-27T23:13:24.183", "Body": "

Current smart homes are essentially a particular kind of IPv4 private network, comprising new embedded devices (e.g., a thermostat) as well as generic computing devices (e.g., PCs) connected to the Internet through gateways (Figure 1).

\n\n

\"enter

\n\n

With the adoption of the IPv6 protocol, these embedded devices could be uniquely addressable and able to connect directly to the Internet through 6LoWPAN Border Routers (Figure 2).

\n\n

\"enter

\n\n

I found that all smart home platforms are implemented according to the first network architecture, either hub-based, cloud-based, or both (e.g., Samsung\u2019s SmartThings).

\n\n

I am wondering if there are also 6LoWPAN based smart home platforms in the market?
\nI can\u2019t find them!

\n\n

Why is this type of topology not widely adopted?

\n", "Title": "Are there 6LoWPAN based smart home platforms?", "Tags": "|smart-home|6lowpan|", "Answer": "

Thread is a 6LoWPAN-based protocol which is starting to get some adoption by manufacturers. As noted on their website, it's backed by various companies such as Nest, Yale, ARM and Qualcomm and is probably the most likely candidate for any 6LoWPAN implementations in future in the smart home.

\n\n

So far, adoption hasn't been great though. Their website advertises a grand total of two Thread-ready devices, and these are based on a proprietary extension to the open Thread protocol, called Nest Weave.

\n\n

This article talks about one of the first Thread border routers coming to market, produced by Eero, but the situation seems relatively unchanged in terms of adoption since that article was written a few months ago:

\n\n
\n

Much like Wi-Fi, Bluetooth, ZigBee, and Z-Wave, [Thread] is another way for devices to communicate wirelessly throughout the home. But unlike those well-established protocols, Thread has practically no adoption among device makers right now.

\n
\n\n

As for why that's the case, I think the obvious answer is that Thread is much 'younger' than its competitors, and the pieces needed to make it useful haven't existed for long, like the border router, so making a Thread device is a little risky for manufacturers. Hopefully this will change, and some think that Thread could potentially become very widely adopted... But it's impossible to say at the minute.

\n\n

It's worth noting that competitors like SmartThings hubs which support ZigBee and Z-Wave have been around since 2013, and the ZigBee protocol itself has existed since 2005. As Thread matures, perhaps it will supplant the existing options \u2014 or perhaps, manufacturers will stick to the 'safe' options of ZigBee and Z-Wave which are having more iterative improvements than radical changes of protocol.

\n" }, { "Id": "2598", "CreationDate": "2018-01-28T12:08:51.313", "Body": "

I bought a smart wall switch.

\n\n

The switch it replaced had only 2 wires (in and out).

\n\n

My knowledge of electricity might be only limited, but how can this work?

\n\n

A normal switch just opens a circuit so the current no longer goes through right? So then if there is no longer any current, how does the smart switch gets power?

\n", "Title": "How does a smart switch get its power?", "Tags": "|smart-home|", "Answer": "

3 ways:

\n\n
    \n
  1. Wire themselves between supply-hot and neutral. They obtain power in the normal and ordinary way.

  2. \n
  3. Wire themselves between supply hot and Safety Ground. This is either illegal foreign dreck, or a reputable maker has appealed to Underwriter's Laboratories (UL) for a waiver to do this, because presumably they've shown UL that the device won't degrade grounding protection and electrify all the grounds e.g. if a ground wire breaks. NFPA has told UL to quit doing that.

  4. \n
  5. Wire their electronics in series with the incandescent bulb, and then leak power through the bulb to power themselves. This will make the incandescent glow, too dimly to be seen. CFL and LED driver circuits will have a different reaction. They will either

    \n\n
      \n
    • glow visibly (because they are more efficient than incandescent)
    • \n
    • allow the trickle of current to pass, but only if they are designed to do this
    • \n
    • block the current and prevent the switch from working
    • \n
    • take damage or burn up, potentially starting a fire
    • \n
  6. \n
\n\n

One solution to #3 is that a specially made \"resistor\" module (it's not quite a resistor) can be added in parallel with the light(s) to facilitate that leakage current. Never use random electronics parts in a mains electrical installation; always use products UL-listed for that use. Several companies make \"resistor\" modules that will solve this problem, and they are readily available.

\n" }, { "Id": "2617", "CreationDate": "2018-01-31T14:19:41.237", "Body": "

I want to exploit a specific binary on an embedded device (First Part of Examining IP Camera). As this binary won't execute outside the embedded device, I'm going to examine it remotely.\nIt has an SD card slot for storing pictures and videos. I cross-compiled dropbear statically for this platform and executed it successfully.

\n\n

To allow my host to SSH into the target device, I have to add the id_rsa public key to the device's ~/.ssh/authorized_keys. However, the root filesystem is mounted completely read-only:

\n\n
/mnt/disc1/dropbear_armv5 # touch ~/.ssh\ntouch: /root/.ssh: Read-only file system\n
\n\n

I would like to find a way to specify the location of the keys elsewhere, or to run it without using any keys. Does anyone know a way of achieving this? In case I have misunderstood the principles of doing this, help would be appreciated too.

\n\n

On the target

\n\n
/mnt/disc1/dropbear_armv5 # ./dropbear_static -r dropbear_dss_host_key -r   dropbear_rsa_host_key -B -E\n/mnt/disc1/dropbear_armv5 # [27893] Jan 31 22:09:57 Running in background\n[28019] Jan 31 22:10:28 Child connection from 192.168.12.1:47398\n[28019] Jan 31 22:10:29 Login attempt for nonexistent user from 192.168.12.1:47398\n[28019] Jan 31 22:10:29 Login attempt for nonexistent user from 192.168.12.1:47398\n[28019] Jan 31 22:10:32 Exit before auth: Exited normally\n
\n\n

On the host

\n\n
# ssh -i ip_cam_rsa root@192.168.12.176                         :(\nroot@192.168.12.176's password: \n\nPermission denied, please try again.\n
\n", "Title": "Running Dropbear SSH server completely from SD card because filesystem is read only", "Tags": "|arm|", "Answer": "

Dropbear hard-codes the location ~/.ssh/authorized_keys in its source code, where ~ is the home directory of the target user as read from the user database. If you can't change the user database and can't make the home directory read-write, then you need to modify the source code.

\n\n

You may be able to make the home directory read-write by mounting a different filesystem over it. That depends what tools are available on the device. For example, maybe you can arrange to mount an in-memory filesystem:

\n\n
mount -t tmpfs root /root\ncp -Rp /somewhere/writable/root/.ssh /root/\ndropbear\n
\n\n

Or maybe you can make a bind mount:

\n\n
mount --bind /somewhere/writable/root /root\ndropbear\n
\n" }, { "Id": "2621", "CreationDate": "2018-02-03T20:19:13.487", "Body": "

I hadn't realised that the Argos satellite network existed and animal trackers communicated with it.

\n\n

Tracking technology.

\n\n

Does anyone know how much it costs and what level of power is required to communicate with GPS satellites?

\n", "Title": "GPS ground communications expensive (financially and power requirement)?", "Tags": "|gps|", "Answer": "

The closest thing to what you appear to be looking for is probably the Iridium satellite network. This system is used by tracking/data logging products that need to work pretty much everywhere (including the middle of the ocean where there is no cell coverage).

\n\n

Examples of systems using Iridium include things like the Rock7 RockBLOCK. The link provides details of the power/cost requirements:

\n\n
\n

your host needs to supply a minimum of 100mA @ 5V.

\n
\n\n

...

\n\n
\n

Line rental costs \u00a310.00 per month

\n
\n\n

...

\n\n
\n

Credits are used each time you transmit. 1 credit is used per 50 bytes (or part thereof) of message sent or received.

\n
\n" }, { "Id": "2638", "CreationDate": "2018-02-11T16:34:33.197", "Body": "

Problem

\n

I am unable to receive the signals sent with a KaKu APA3-1500R remote with a RF receiver connected to a Raspberry Pi 3.

\n

Hardware

\n

-RF Receiver (in Dutch)
\n-KaKu APA3-1500R (in Dutch)

\n

Additional info

\n\n

Question

\n

How should I proceed to solve this problem? Perhaps there is a way to read the "raw" values received by the RF receiver, without specifying any protocol?

\n", "Title": "Unable to receive RF signals from a remote using a RF receiver", "Tags": "|smart-home|raspberry-pi|", "Answer": "

Finally, I managed to record and successfully play back the RF signals using this git repo.

\n" }, { "Id": "2641", "CreationDate": "2018-02-13T14:45:21.210", "Body": "

I am using Echo Dots and the Tradfri Gateway and have seven Ikea Tradfri bulbs in two rooms. These bulbs show up under devices and can be put into groups, which work perfectly with Alexa voice commands.

\n\n

I'm trying to create a routine using these bulbs that would involve turning on the TV/Stereo and dimming the lights. The scene from my Harmony Hub for controlling the TV/Stereo works perfectly. My problem comes from the bulbs. They don't show up when I go to Add Action \u2192 Smart Home \u2192 Control device. The only smart home device showing up is the Harmony Hub.

\n\n

Since they are definitely detected by the app and on the device list, I'm wondering if anyone has a suggestion for how to get them to show up under routines.

\n", "Title": "How to use Tradfri Lights with Ikea Tradfri Gateway within Alexa routines?", "Tags": "|smart-home|alexa|amazon-echo|ikea-tradfri|", "Answer": "

It seems that the app or firmware has been updated now. It is working perfectly, using the process I described in the question (i.e. using control devices).

\n" }, { "Id": "2644", "CreationDate": "2018-02-14T10:54:45.667", "Body": "

I have a multi-Alexa setup at home. It's a three storey house, so an Echo Plus on the bottom floor and 2nd gen Echos on the other floors.

\n\n

The hue bulb that came with it worked fine when it was plugged in on the bottom floor. I've now moved it to the top floor, and it no longer connects. I've tried removing the device from Alexa and adding it again, but when I search for devices, it doesn't find any.

\n\n

So my question is this: Does the Hue bulb need to be connected to an Echo Plus? Or if there's an Echo Plus in the network, it should be able to connect (my original assumption)?

\n", "Title": "Do Hue bulbs need to be connected to an Echo Plus directly?", "Tags": "|smart-home|alexa|amazon-echo|philips-hue|", "Answer": "

The Hue bulb must be connected to the Echo Plus itself.

\n\n

Philips Hue bulbs communicate using a protocol called ZigBee (as explained in a little more detail here). The Echo Plus has a built-in radio/smart hub to communicate using ZigBee with other ZigBee-compatible devices, but all other Echo devices do not have any support for ZigBee \u2014 they operate using Wi-Fi only.

\n\n

Your other Echos physically can't communicate with the light bulb \u2014 they don't 'speak' the correct language, so it's not just the hub features that are required from the Echo Plus; it's the actual wireless connection that's required.

\n\n

You can probably expect about a 10 to 20 metre range, perhaps less in a challenging environment, from a ZigBee hub. Your bulb needs to be within that range to be able to communicate with your Echo Plus and connect to the network. This might involve moving either the bulb or the Echo Plus itself.

\n" }, { "Id": "2647", "CreationDate": "2018-02-14T19:36:03.153", "Body": "

I have my IKEA Tradfri bulbs connected through the Tradfri Gateway. They are all showing version 1.2.214, which the app says is the latest version, as well as the gateway which is 1.3.14.

\n\n

Inside the Alexa app, I've installed the bulbs and they show up and work perfectly (outside of routines), but the Type listed under device settings is other. If they were showing up as light instead of other, I would be able to just say, \"Alexa, lights on.\" vs \"Alexa, turn on Bedroom/Living Room\" as they would be associated with the Echo Dot in each room's group.

\n\n

Has anyone figured out a way to change the Type from other to light?

\n", "Title": "IKEA Tradfri bulbs showing up as \"other\" in the Alexa App, instead of \"light.\"", "Tags": "|smart-home|alexa|amazon-echo|ikea-tradfri|", "Answer": "

You cannot change the type from the app side; it needs updating from the Alexa skill.

\n\n

So you will need to wait for IKEA to update the cloud side of their app to include device types.

\n" }, { "Id": "2658", "CreationDate": "2018-02-18T01:10:03.270", "Body": "

When a Tile tracker is paired with a phone, double tapping a button on the Tile will make the phone ring. This happens even if the phone is on silent mode. I\u2019ve accidentally rung my iPhone at work multiple times, disturbing the people around me.

\n\n

I want to leave the Tile paired to my phone so I can track my personal belongings, but I don\u2019t want my phone to ring. Hiding my phone in the Tile app had no effect.

\n\n

How can I disable the option to ring my phone from a Tile?

\n", "Title": "How do I disable phone ringing from my Tile?", "Tags": "|bluetooth|ios|tile|", "Answer": "

For iOS users, an option to disable "Find Your Phone" was added in Tile v2.28.1.

\n\n

According to a comment by Geoff, this feature was also added to Android sometime prior to May 4, 2021

\n" }, { "Id": "2659", "CreationDate": "2018-02-18T09:19:49.210", "Body": "

Scenario
\nIoT device (currently IPv4 device) that sends via TCP socket a payload to a server once per day. The server has a public IP address, the device is behind a router/NAT. I'm going to use a module based upon ESP8266 (i.e. Olimex one)

\n\n

Goal
\nThe server should be able to send data to any client whenever it needs to.\nI'm no interested in direct client-to-client communication (i.e. connect to a device from my smartphone) like the hole punching is supposed to do.

\n\n

Other requirements
\nThe IoT devices might grow up to several thousands. Their Internet connection is provided by many 4G-enabled routers/modems. Each one will handle 10-20 clients.

\n\n

Proposed solution
\nAs far as I understand a common solution is MQTT. The clients periodically send data to the broker (i.e. Mosquitto running on the hosting server), that in turn updates the main web app that runs on the same server.

\n\n

Question
\nIs MQTT approach suitable for a \"large\" number of devices (1000+) most of them behind a 4G router?

\n", "Title": "Is MQTT scalable with 1000+ clients?", "Tags": "|mqtt|wifi|routers|", "Answer": "

You can use persistent sessions from clients, i.e. the clean-session flag set to false upon connect. In that scenario, even when your client is offline, the broker will buffer messages for it in its own cache and deliver them once the device reconnects.

\n\n
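As a toy illustration of that buffering behaviour, here is an in-memory stand-in (not real MQTT; a real broker only queues QoS 1/2 messages matching subscriptions the persistent session already holds):

```python
class ToyBroker:
    """In-memory stand-in for a broker handling persistent sessions."""
    def __init__(self):
        self.online = {}   # client_id -> messages delivered while connected
        self.queued = {}   # client_id -> messages buffered while offline

    def connect(self, client_id, clean_session=False):
        self.online[client_id] = []
        if not clean_session:
            # Persistent session: flush anything buffered while offline.
            self.online[client_id].extend(self.queued.pop(client_id, []))
        else:
            self.queued.pop(client_id, None)  # clean session discards state

    def disconnect(self, client_id):
        self.online.pop(client_id, None)

    def publish(self, client_id, message):
        if client_id in self.online:
            self.online[client_id].append(message)
        else:
            self.queued.setdefault(client_id, []).append(message)

broker = ToyBroker()
broker.connect("sensor-1", clean_session=False)
broker.disconnect("sensor-1")
broker.publish("sensor-1", "fw-update")   # client offline: message is buffered
broker.connect("sensor-1", clean_session=False)
print(broker.online["sensor-1"])          # -> ['fw-update']
```

With paho, this corresponds to constructing the client with a fixed client ID and clean_session=False, and subscribing at QoS 1 or higher.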

About the quantity: 10K is a relatively low number even for one server. You can configure a Linux server to hold 500K active connections, and if your broker is cloud-based, e.g. provided as a service by some provider, then you can hold even millions of active connections to it.

\n\n

By the way, I think Mosquitto or any other local installation is a perfect choice for development and testing, but when you go into production you will need a SaaS MQTT broker with all the features like HA, redundancy, failover, etc.

\n" }, { "Id": "2660", "CreationDate": "2018-02-18T10:00:55.357", "Body": "

Scenario\nIoT device (currently IPv4 device) that sends via TCP socket a payload to a server once per day. The server has a public IP address, the device is behind a router/NAT. I'm going to use a module based upon ESP8266 (i.e. Olimex one)

\n\n

Goal\nThe server should be able to send data to any client whenever it needs to.\nI'm not interested in direct client-to-client communication (i.e. connecting to a device from my smartphone), which is what hole punching is meant to enable.

\n\n

Other requirements\nThe number of IoT devices might grow to several thousand. Their Internet connection is provided by a 4G-enabled router/modem.

\n\n

Proposed solution\nAs far as I understand a common solution is MQTT. The clients periodically send data to the broker (i.e. Mosquitto running on the hosting server), that in turn updates the main web app that runs on the same server.

\n\n

Question\nCan the web app send data to any client whenever it needs through the broker? In other words: can a subscriber send back data to a specific publisher asynchronously (i.e. without waiting for the next transmission) ?

\n", "Title": "MQTT: Can a subscriber send data to a producer asynchronously?", "Tags": "|mqtt|", "Answer": "

Any MQTT client can both subscribe and publish, there is no distinction between them (only possible ACL rules controlling which users can do what).

\n\n

Also there is no concept of a given client sending data to another client. Messages are published to topics, not other clients. There is nothing to stop a given client subscribing to a specific topic that other clients can then use to send messages to that client.

\n\n

There is also no need to wait for an incoming subscription before publishing a message on a topic.

\n\n

MQTT v5 adds the concept of request/reply style messaging, but the way it does this is by including an extra topic field in the message. This extra topic can be read by a subscriber and used to publish a reply message, but it is only there as a hint, not a hard requirement.
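The request/reply flow can be sketched without a real client library. Messages are modeled here as plain Python dicts, with the response topic as an ordinary field the responder reads (in real MQTT v5 it travels in the message properties, not the payload); topic names are made up for illustration:

```python
# Conceptual sketch of MQTT v5 request/reply using plain dicts, not a client.

def make_request(payload: str, response_topic: str) -> dict:
    """Build a request carrying a Response Topic hint for the responder."""
    return {"topic": "devices/switch1/cmd",
            "payload": payload,
            "response_topic": response_topic}

def handle_request(msg: dict) -> dict:
    """The responder publishes its answer to the hinted response topic."""
    reply_to = msg.get("response_topic", "replies/default")
    return {"topic": reply_to, "payload": "ack: " + msg["payload"]}

req = make_request("power on", "clients/web-app/replies")
rep = handle_request(req)
# rep is published to "clients/web-app/replies", back to the requester
```

The key point the sketch illustrates: the reply still goes to a topic, never "to a client", and the responder is free to ignore the hint entirely.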

\n\n

Web Apps can use MQTT over Websockets to connect to the broker and behave in just the same way as any other MQTT client.

\n" }, { "Id": "2663", "CreationDate": "2018-02-18T14:43:54.680", "Body": "

I'd like to monitor a state of rooms in my apartment and turn off heating/cooling/lights when nobody is there.

\n\n

I see there is a bunch of PIR sensors on the market, however I don't understand why some of them are labeled as \"motion sensors\" and some as \"presence sensors\", and whether there is a real difference between them.

\n\n

The experience with motion sensors usually installed on the stairs outside of apartments clearly tells me that motion sensors are not good for the task.

\n\n

So, are there sensors which can reliably detect presence/absence of people in a room?

\n", "Title": "How to detect a presence (not motion) of anyone in a room?", "Tags": "|sensors|", "Answer": "

This Adafruit article, PIR Motion Sensor, provides a description of a PIR sensor and the basic mechanics of how it works. Here is a synopsis of how a PIR sensor is designed:

\n\n
\n

PIRs are basically made of a pyroelectric sensor (which you can see\n below as the round metal can with a rectangular crystal in the\n center), which can detect levels of infrared radiation. Everything\n emits some low level radiation, and the hotter something is, the more\n radiation is emitted. The sensor in a motion detector is actually\n split in two halves. The reason for that is that we are looking to\n detect motion (change) not average IR levels. The two halves are wired\n up so that they cancel each other out. If one half sees more or less\n IR radiation than the other, the output will swing high or low.

\n
\n\n

Later, the article explains the design of the PIR sensor in more detail:

\n\n
\n

The PIR sensor itself has two slots in it, each slot is made of a\n special material that is sensitive to IR. The lens used here is not\n really doing much and so we see that the two slots can 'see' out past\n some distance (basically the sensitivity of the sensor). When the\n sensor is idle, both slots detect the same amount of IR, the ambient\n amount radiated from the room or walls or outdoors. When a warm body\n like a human or animal passes by, it first intercepts one half of the\n PIR sensor, which causes a positive differential change between the\n two halves. When the warm body leaves the sensing area, the reverse\n happens, whereby the sensor generates a negative differential change.\n These change pulses are what is detected.

\n
\n\n

So obviously a PIR sensor will not fit your requirements, since it depends on an infrared (IR) source moving across the sensor's field of view.

\n\n

However, there are IR cameras, commonly called FLIR (Forward Looking InfraRed) cameras, which can be used with appropriate image processing software to \"see\" IR-emitting objects.

\n\n

This SparkFun article, FLIR Lepton Hookup Guide, describes using a FLIR camera along with the SimpleCV open source framework to build a computer vision device which in this case would be \"seeing\" in the infrared spectrum.

\n\n

Here is a PDF of a slide package, Computer Vision Using SimpleCV and the Raspberry Pi, which provides a nice overview of using the SimpleCV framework to do camera image processing.

\n\n

Here is a wiki with a detailed article, How to install FLIR Lepton Thermal Camera and applications on Raspberry Pi, which may be helpful.

\n\n

There is also the Omron D6T series of thermal sensors which may provide what you need.

\n\n
\n

Omron's D6T Series MEMS Thermal Sensors are a super-sensitive infrared\n temperature sensor that makes full use of Omron's proprietary MEMS\n sensing technology. Unlike typical pyroelectric human presence sensors\n that rely on motion detection, the D6T thermal sensor is able to\n detect the presence of stationary humans by detecting body heat, and\n can therefore be used to automatically switch off unnecessary\n lighting, air conditioning, etc. when people are not present.

\n
\n\n

You may also find this Automated Elephant Detection project helpful.

\n" }, { "Id": "2666", "CreationDate": "2018-02-19T17:45:01.037", "Body": "

The title was hard to formulate, but I can explain my problem in more detail here.

\n\n

I am designing an embedded product which consists of a cloud service and embedded hardware. The cloud service will have a REST API (though its availability is not the selling point) and it will communicate with the embedded hardware. Ideally, the embedded hardware would also have a REST-like interface for communication. The problem is finding a good software stack for it (or that's what I think the problem is).

\n\n

The best option in my opinion would be some kind of embedded Linux distribution with its own web app installed inside (Ubuntu Core + Django...?). Is it somehow possible to use this combination in a commercial product but at the same time keep the webapp inside closed?

\n\n

The Ubuntu website provides multiple supported platforms and some of them look ideal for my use case. Like I said before, my main concern at the moment is licensing.

\n", "Title": "Building an IoT product - what is the best way to avoid sharing in-house proprietary code?", "Tags": "|hardware|ubuntu-core|", "Answer": "

First of all, note that I'm not a lawyer. Get one if you think you need legal advice. Licensing is one such area where I'd recommend one.

\n\n

Open source licenses vary greatly in what they allow. Let's use the example of a library that you're using (unmodified) in your project. Two common licenses you might find are GPL and LGPL, which vary on how they treat this issue. From this article, for example:

\n\n
\n

The GNU Project has two principal licenses to use for libraries. One\n is the GNU Lesser GPL; the other is the ordinary GNU GPL. The choice\n of license makes a big difference: using the Lesser GPL permits use of\n the library in proprietary programs; using the ordinary GPL for a\n library makes it available only for free programs.

\n
\n\n

Other license examples which are a bit more open in this regard include MIT and BSD.

\n\n

A lot of Linux software is GPL, and this will likely include components of any OS you select (e.g. Ubuntu Core). However, as long as your project isn't considered a derivative work from these projects you shouldn't be affected. More info in this answer.

\n\n

From this perspective, using Ubuntu Core for your product shouldn't affect whether or not the application you ship on it is open or closed. Indeed, packaging your application as a snap is a good way to distribute binary blobs.

\n\n

You've probably considered this, but from a technical perspective, if you ship a Python snap using Django, the snap won't be binary blobs: by default your code will be sitting there for anyone who wants to see it (either by dumping the disk contents or by gaining shell access somehow). You may want to obfuscate it or ship bytecode instead, etc.
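As a sketch of one mitigation, the standard library can compile modules to bytecode so only `.pyc` files are shipped. Note this only deters casual inspection, since Python bytecode can be decompiled; the module name below is made up:

```python
import pathlib
import py_compile
import tempfile

def ship_bytecode(src: pathlib.Path, out_dir: pathlib.Path) -> pathlib.Path:
    """Compile one module to a .pyc you can distribute instead of the source."""
    pyc = out_dir / (src.stem + ".pyc")
    py_compile.compile(str(src), cfile=str(pyc), doraise=True)
    return pyc

# Demo on a throwaway module:
with tempfile.TemporaryDirectory() as tmp:
    tmp_path = pathlib.Path(tmp)
    module = tmp_path / "secret_logic.py"          # hypothetical module
    module.write_text("def answer():\n    return 42\n")
    pyc = ship_bytecode(module, tmp_path)
    # pyc now exists and contains no plain-text source
```

For a whole tree, `compileall` does the same job recursively; either way, treat this as obfuscation, not protection.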

\n" }, { "Id": "2679", "CreationDate": "2018-02-26T03:00:28.620", "Body": "

Looking for feedback from others who have been using the SIM53xx series modules for IoT solutions over the 3G network. I have an MQTT broker on the Internet to collect data for a fleet tracking system being developed. There is a lot of information about this type of solution with 2G connections, but as 2G is being phased out I don't have a choice but to use 3G, as the newer NB1/Cat M1, Sigfox, etc. solutions aren't available on the small Pacific island this solution is targeted at, and LoRa doesn't have sufficient range (the island isn't that small!).

\n\n

After a lot of research it seems the SIM5320 is the most applicable solution (reasonable cost, popular, small, has GPS features), and there is also a whitepaper stating that the AT command set has extensions for MQTT session setup, pub/sub, etc. Perfect for my application; however, it is only referred to in an isolated whitepaper (simcom mqtt 3g) and there are no other references to it in other SIMCom documentation or on the net.

\n\n

Before I go out and purchase a couple for testing, can anyone confirm if the MQTT extensions actually exist and if they are reliable?

\n\n

Also, any feedback on reliability/support for SIMCom modules would be appreciated, as I'm new to IoT over 3G.

\n", "Title": "3G SIM5320 support for MQTT", "Tags": "|mqtt|mobile-data|", "Answer": "

I used the SIM5320 in a product and none of the modules I bought had the MQTT functionality, so you must implement the MQTT protocol in an external processor or MCU. Maybe you have to request different firmware at the moment of purchase.

\n\n

Regarding reliability, I think the modem functionality is reliable enough. I sometimes find that the modem gets stuck (could be the Linux driver or modem firmware) and a hard reset makes everything work again, with no data loss.

\n\n

I think the GPS works fine most of the time. Sometimes it stops sending useful NMEA sentences and just sends a proprietary one ($PSTIS*61); sometimes it loses the fix and is unable to recover by itself, and I have to reset the module (a lot of the time that doesn't work); and sometimes it gives wrong fixes, about 1000 km off, but this can be filtered in software.

\n\n

The GPS part of these modems lacks configuration options and commands: you can't change the fix rate, and you can't do a cold start without first disabling the GPS. This is problematic because if you have no fix, the GPS can refuse to be disabled on the grounds that it is downloading ephemeris data for a faster next startup.

\n\n

I think your best option would be to try the SIM5360, as it has two serial ports, one for modem/GPS and one exclusively for GPS, and it also supports GLONASS. The SIM5320 has only one serial port and only supports GPS.

\n\n

Anyway, I would include an optional independent GPS module (u-blox, SIMCom, etc.) in the design. I think that using a SIMCom 3G module and an independent GPS module is still cheaper than using a 3G modem from another manufacturer (at least at small scale).

\n\n

About support: they don't seem to offer direct support, but the distributor I bought from offered plenty of support free of charge; they asked me for logs, passed me tools for the operating system I was using and answered very technical questions.

\n" }, { "Id": "2683", "CreationDate": "2018-03-01T22:12:55.667", "Body": "

I am implementing light control with MQTT/Node, which consists mainly of these elements: device (behind a NAT), server (MQTT broker), client (web browser).

\n\n

Part of the architecture/process I came up with was: \nThe device needs to open a socket with the server and keep it open (and not the other way around, because of NAT), so whenever the client sends a control command to the server, the server forwards it to the device via the open websocket.

\n\n

So my broader question would be:\nHow are device-behind-NAT/server connections handled normally in IoT?

\n\n

NOTE: I\u2019ve seen lots of questions explaining the case when a device writes to the cloud and then a client reads from it, which doesn\u2019t need the socket open all the time (just when the device writes) But haven\u2019t seen the case when the server/client want to write to the device (from outside the NAT)

\n", "Title": "Should I keep open a socket between IoT NAT device and server?", "Tags": "|smart-home|system-architecture|", "Answer": "

They are handled by the device connecting out and maintaining an ongoing TCP connection.

\n\n

TCP connections are bi-directional once opened, so as long as the device opens the connection outbound through the NAT gateway, the cloud can push information/commands back down that link.
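The pattern can be demonstrated entirely on localhost: the "device" dials out (as it would through NAT) and the "server" later pushes a command down the same, already-open connection. This is a sketch, not a production client (no reconnect, framing or keep-alive logic):

```python
import socket
import threading

def run_server(listener: socket.socket, command: bytes) -> None:
    """Accept the device's outbound connection, then push a command back."""
    conn, _addr = listener.accept()
    with conn:
        conn.sendall(command)   # server-initiated traffic over the same link

# "Cloud" side: listen on an ephemeral localhost port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=run_server, args=(listener, b"LIGHT ON"))
t.start()

# "Device" side: connect outbound (this is the NAT-friendly direction),
# then simply read whatever the server decides to push.
device = socket.socket()
device.connect(("127.0.0.1", port))
received = b""
while True:
    chunk = device.recv(64)
    if not chunk:
        break
    received += chunk
device.close()
t.join()
listener.close()
# received == b"LIGHT ON"
```

In practice this is exactly what an MQTT client does for you: one outbound connection, kept alive, with the broker pushing publishes back down it.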

\n" }, { "Id": "2695", "CreationDate": "2018-03-04T11:36:43.707", "Body": "

I am trying to pull a few bytes per second from a microchip board attached to a temperature sensor and a photodiode, using Windows 10 IoT on a DragonBoard 410c. It seems like there's interference when I attempt to read from more than one GPIO at a time. How can I get input from these GPIOs without messing up my clock signal?

\n\n
namespace DashWall\n{\n    public sealed partial class MainPage : Page\n    {\n        private const int CLOCK_PIN = 12;\n        private const int TEMP_PIN = 34;\n        private const int PHOTO_PIN = 33;\n        private const int START_PIN = 36;\n\n        private GpioPin clockPin;\n        private GpioPinValue clockPinValue;\n        private GpioPin tempPin;\n        private GpioPinValue tempPinValue;\n        private GpioPin photoPin;\n        private GpioPinValue photoPinValue;\n        private GpioPin startPin;\n        private GpioPinValue startPinValue;\n\n        private DispatcherTimer timer;\n\n        private GpioController gpio;\n\n        public MainPage()\n        {\n            InitializeComponent();\n\n            timer = new DispatcherTimer();\n            timer.Interval = TimeSpan.FromMilliseconds(10);\n            timer.Tick += Timer_Tick;\n\n            InitGpio();\n            timer.Start();\n        }\n\n        private void InitGpio()\n        {\n            gpio = GpioController.GetDefault();\n\n            clockPin = gpio.OpenPin(CLOCK_PIN);\n            tempPin = gpio.OpenPin(TEMP_PIN);\n            photoPin = gpio.OpenPin(PHOTO_PIN);\n            startPin = gpio.OpenPin(START_PIN);\n\n            clockPin.SetDriveMode(GpioPinDriveMode.InputPullUp);\n            tempPin.SetDriveMode(GpioPinDriveMode.InputPullUp);\n            photoPin.SetDriveMode(GpioPinDriveMode.InputPullUp);\n            startPin.SetDriveMode(GpioPinDriveMode.InputPullUp);\n        }\n\n        bool isReading = false;\n        bool bitRead = false;\n        uint curPhoto;\n        uint curTemp;\n        uint bitCount = 0;\n        bool start;\n        bool clock;\n        bool photo;\n        bool temp;\n        private void Timer_Tick(object sender, object e)\n        {\n            handleInput();\n            Time.Text = \"\" + clock;\n            Temp.Text = \"\" + temp;\n\n            if (start && clock && !isReading)\n            {\n                Temp.Text = 
\"\" + start;\n                isReading = true;\n                curPhoto = 0;\n                curTemp = 0;\n            }\n\n            if (isReading)\n            {\n                if (clock && !bitRead)\n                {\n                    ReadBit();\n                    bitRead = true;\n                }\n                if (!clock)\n                {\n                    bitRead = false;\n                }\n            }\n        }\n\n        private void ReadBit()\n        {\n            uint photoBit = 0;\n            uint tempBit = 0;\n            if (photo) { photoBit = 1; }\n            if (temp) { tempBit = 1; }\n            curPhoto = curPhoto << 1;\n            curPhoto = curPhoto + photoBit;\n\n            curTemp = curTemp << 1;\n            curTemp = curTemp + tempBit;\n\n            bitCount++;\n\n            if (bitCount >= 8)\n            {\n                isReading = false;\n                bitCount = 0;\n\n                //Time.Text = \"\" + Convert.ToString(curPhoto, 2);\n                //Temp.Text = \"\" + curTemp;\n            }\n        }\n\n        private void handleInput()\n        {\n            clockPinValue = clockPin.Read();\n            clock = clockPinValue == GpioPinValue.High;\n\n            startPinValue = startPin.Read();\n            start = startPinValue == GpioPinValue.High;\n\n            photoPinValue = photoPin.Read();\n            photo = photoPinValue == GpioPinValue.High;\n\n            tempPinValue = tempPin.Read();\n            temp = tempPinValue == GpioPinValue.High;\n        }\n\n    }\n}\n
\n", "Title": "Polling multiple GPIOs", "Tags": "|sensors|microsoft-windows-iot|gpio|", "Answer": "

Turns out that the ground cable between the two microcontrollers had been dislodged. Putting that back in place fixed my problem.

\n" }, { "Id": "2706", "CreationDate": "2018-03-07T19:36:50.047", "Body": "

Summary of the issue:

\n\n
    \n
  1. Connecting to test.mosquitto.org or iot.eclipse.org with a keep-alive of more than 5 minutes, everything seems to work just as expected.

  2. \n
  3. Connecting to my broker (both on Azure-hosted VMs; one is Mosquitto and one is Emqttd), my clients don't send a ping if the keep-alive is longer than 5 minutes. They just die. The broker eventually disconnects them for not pinging. (I'm not using an Azure load balancer; I'm connecting directly to the VM.)

  4. \n
\n\n

The thing is, the connected device doesn't know it's been disconnected if it's over a cell network (not sure why?)

\n\n

Over an Ethernet network, it'll reconnect itself as it should.

\n\n

Not sure if there is something unique about the Azure VM's that is causing my disconnection/timeout issue with longer keep-alives?

\n\n

Lastly, if I use a 2 minutes or shorter keep-alive, everything works.

\n", "Title": "MQTT Connection Using Keep-Alive > 5 Minutes Silently Disconnects on Azure VM Broker?", "Tags": "|mqtt|mosquitto|emq|azure|", "Answer": "

The cause is the kernel's TCP keep-alive configuration:

\n\n

\"tcp_keepalive_time\"

\n\n

I can't remember where I captured this from, so I'm unable to link to the source, but the information is accurate.

\n\n

Yes, Microsoft should bake more appropriate values into their Azure Linux images.
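One application-level workaround, sketched below, is for the client to enable TCP keep-alives on its own socket with an idle time shorter than the observed cutoff (the question's observations suggest idle connections die somewhere between 2 and 5 minutes). The `TCP_KEEP*` option names are Linux-specific and the values are illustrative:

```python
import socket

def make_keepalive_socket(idle_s: int = 120, interval_s: int = 30,
                          probes: int = 4) -> socket.socket:
    """Return a TCP socket that sends keep-alive probes after idle_s seconds
    of silence, so intermediate infrastructure sees periodic traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only option names
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return s

sock = make_keepalive_socket()
# sock can now be handed to a client library that accepts a pre-built socket,
# or the same options applied system-wide via the tcp_keepalive_* sysctls.
```

This keeps the TCP connection alive independently of the MQTT keep-alive interval, which is the setting the broker actually enforces.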

\n" }, { "Id": "2711", "CreationDate": "2018-03-08T16:06:06.883", "Body": "

We currently have an IoT device that connects to our servers using a raw TCP connection over 2G, and from time to time the device sends a \"keep-alive\"-ish message, as we've seen it consumes less battery than reopening the connection every time we need to send a message (~once every 1 to 5 minutes).

\n\n

We're thinking about switching to LTE-M, especially as we've seen the eDRX mode would permit us to save potentially a lot on the battery life, however I have the following question:

\n\n

When in eDRX mode, is the TCP connection still open, i.e if I send from the server some data, will the client receive it when it awakes?

\n", "Title": "When using LTE-M's eDRX mode, is the TCP connection still open?", "Tags": "|networking|power-consumption|", "Answer": "

Yes, when using LTE-M's eDRX mode, the TCP connection is still open, but there are some complications depending on which carrier you are using.

\n\n

First of all, sending data from the server to the mobile device will never wake up the mobile device instantly. The mobile device will wake up at the next eDRX interval (although some other non-cellular event could also cause it to wake sooner).

\n\n

Now, the confusing part is whether or not your mobile device will receive the data from the server when it wakes at that next eDRX interval. The GSMA LTE-M Deployment Guide actually discusses this in sections 6.2 and 6.4. The troubling line is in section 6.4:

\n\n
\n

Thus, it means that in case when a LTE-M device is in either PSM or eDRX, mobile terminating messages, depending on MNO choice the messages will either be buffered or discarded

\n
\n\n

To put it in different terms, the cell service provider (e.g. Verizon, AT&T, etc.) gets to decide whether they want to buffer the TCP packets and forward them to your device when it wakes up later, or just discard them.

\n\n

Even on carriers which do buffer data during eDRX sleep periods, they tend to have an upper time limit on the order of minutes.

\n\n

However, your server is likely unaware of the intermittent nature of the eDRX connection and will be doing its normal TCP retries, and there is always the chance that one of those retries will be sent during the eDRX window and get through to the mobile device. When this happens, it looks like it \"just works\" even on networks which do discard data, but it's more luck than anything else. (Shortening the eDRX cycle time and using faster TCP retry rates on the server will both improve your luck in this setup.)

\n" }, { "Id": "2715", "CreationDate": "2018-03-10T17:54:45.863", "Body": "

I have an Aeotec Nano Dimmer, and am trying to understand how to dim lights using an external SPST switch connected to it.

\n\n

If I toggle the switch the lights go on and off.

\n\n

If I toggle it and then toggle it back instantly, the light level starts to change, but it seems quite erratic: I can't reliably stop it.

\n\n

The documentation is silent about it.

\n", "Title": "How to use Aeotec Nano Dimmer with a SPST switch?", "Tags": "|dimmer|", "Answer": "

The existing answer isn't quite correct in its conclusion. You can't make the dimmer see a toggle switch as momentary. Sure, you can configure it as momentary, but it's still a toggle switch, and the problem just changes; it does not go away.

\n

The solution is to use a momentary switch.

\n\n" }, { "Id": "2717", "CreationDate": "2018-03-13T00:00:25.003", "Body": "

As I was looking information regarding Lo.Ra and Lo.RaWAN over these links:

\n\n\n\n

I also have looked many tutorial sites on how to setup a Lo.Ra gateway:

\n\n\n\n

But all these material made me quite nervous and generated me these questions:

\n\n\n\n

I simply cannot understand the purpose of having a registered gateway. Can't I have \"something\" that receives the data and sends it over IP-based networks, e.g. via an Ethernet cable, without the need to register at https://www.thethingsnetwork.org/?

\n", "Title": "Lo.Ra and Lo.RaWan: Why is a gateway needed?", "Tags": "|lora|lorawan|", "Answer": "
\n

can't I have \"something\" that receives the data and sends them over ip-based networks?

\n
\n\n

Yes you can and that's called a gateway.

\n\n

https://www.lora-alliance.org/technology

\n\n

https://www.thethingsnetwork.org/docs/gateways/

\n\n

These services, like The Things Network and Loriot.io, provide software for the gateways, cloud servers (a backend) and APIs. You don't need to use them; they just make your life easier, and I believe most services that The Things Network provides are free.

\n" }, { "Id": "2720", "CreationDate": "2018-03-13T13:42:06.150", "Body": "

As I continue my quest into Lo.RaWAN, I am looking over some tutorials on how to set up a Raspberry Pi Lo.RaWAN gateway. So far I have found these:

\n\n\n\n

Also I have searched information regarding the topology and I found this link:\n http://www.radio-electronics.com/info/wireless/lora/lorawan-network-architecture.php

\n\n

While I was studying the information provided in those links, I noticed that they need something called a \"Network Server\", which can either be your own implementation, such as https://github.com/brocaar/loraserver, or provided as a service, such as https://www.thethingsnetwork.org.

\n\n

So far I understood that the network server actually does the network management through the gateways.

\n\n

But I cannot understand why a \"network server\" is needed; I mean, why do I need a service to manage the Lo.RaWAN network using a special server?

\n\n

In other words, I cannot figure out why the designers of Lo.RaWAN thought: \"OK, let the gateway send the traffic to a Network Server\".

\n", "Title": "Lo.RaWAN: Why a network server is needed?", "Tags": "|lorawan|", "Answer": "

It wouldn't be much of a Wide Area Network (WAN) if you just had a single gateway.

\n\n

While you can certainly have nodes report into a single gateway, the more expansive schemes (for example, The Things Network) let multiple gateways coordinate through traditional Internet links, either by all reporting up to a single server or better yet into a network of cooperating servers.

\n\n

This means the network can extend beyond the radio horizon of a single gateway, and instead be the union of the radio horizons of all the gateways cooperating in the network.

\n\n

For something like The Things Network, that means a bunch of bubbles, often clustered in specific areas, but collectively scattered all around the world (though the frequencies authorized for use do differ by region).

\n\n

So in the end the question of \"is it needed\" comes down to \"what do you want to do?\" If a single gateway meets your needs and you can run whatever needs to interact with nodes on that gateway's computer or on clients that connect to it, then no, you don't need another server. But if you want to leverage multiple gateways, you'll need some sort of server infrastructure to help them collaborate.

\n" }, { "Id": "2731", "CreationDate": "2018-03-16T12:54:51.700", "Body": "

I'm currently reading the Eclipse IoT documents for a large-scale company.\nKnowing that having a Raspberry Pi as a gateway means more costs, do you think that Eclipse solutions for an MQTT connection will work without a gateway?

\n", "Title": "Eclipse solutions without a gateway", "Tags": "|mqtt|eclipse-iot|", "Answer": "
\n

If your node devices already speak IP protocols, have a reasonable amount of volatile and non-volatile storage, and have an outgoing network connection they should be able to connect directly with an MQTT broker. But if they use some other radio standard like zigbee then you'd need a gateway. Note that a \"gateway\" often need be little more than a process running on a box - you could likely add this to a customizable WiFi router for example, or an existing on-site server. You probably would not want to use a Raspberry Pi in a permanent installation or deployment. - @Chris Stratton

\n
\n" }, { "Id": "2732", "CreationDate": "2018-03-16T13:00:04.920", "Body": "

I am new to MQTT (and home automation in general, I am much more in the systems and dev side), flashed a WiFi switch (Sonoff Basic), connected it to an instance of Mosquitto and Home Assistant and so far everything works fine.

\n\n

When monitoring the Mosquitto bus, I see all kind of messages, such as

\n\n
tele/hass1/LWT Online\ntele/home/room1/switch1/LWT Online\ncmnd/home/room1/switch1/POWER OFF\n
\n\n

I recognize home/room1/switch1, which I defined on my WiFi switch; the switch then sent some topics prefixed by cmnd (command? that would be surprising, as nobody manipulated the switch) and tele (telemetry?). tele/hass1/... is generated by Home Assistant.

\n\n

Are there any standards or commonly accepted practices for the prefixes?

\n\n

The MQTT documentation explains how topics are formatted but does not introduce any structure (except for topics beginning with $), so I guess that, at best, it would be a best practice (or just common practice).

\n", "Title": "Are there standardized MQTT topics?", "Tags": "|mqtt|communication|mosquitto|", "Answer": "

In general, no \u2014 there aren't any standards for topic naming beyond the MQTT specifications.

\n

There are plenty of opinions about how you should construct your MQTT topics, and not a lot of fixed rules. While this is a bit unsettling when you'd like to know exactly what the best practice is, the lack of strict rules does mean you get a lot of flexibility with an MQTT broker.

\n

As you're using Home Assistant, this narrows things down a bit, but more specifically, the topics you're looking at are specific to your Sonoff switch. The API is described in this wiki:

\n
\n\n
\n

The diagram referred to is here, although it is best viewed in the wiki page linked above.

\n

In general, any hierarchy used will be manufacturer or system specific; Sonoff devices will generally follow a documented MQTT topic structure, and other manufacturers might use something different. Not all manufacturers will document their products well (or at all!) \u2014 so take care when buying products.
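Whatever naming scheme a manufacturer adopts, the wildcard semantics are fixed by the MQTT specification: `+` matches exactly one topic level and `#` matches the remainder of the topic. A small matcher makes the rules concrete against the Sonoff-style topics above (a simplified sketch that ignores some edge cases, e.g. `$`-prefixed topics and trailing separators):

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Return True if an MQTT subscription pattern matches a concrete topic."""
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":                     # multi-level wildcard, must be last
            return True
        if i >= len(t_levels):           # pattern is longer than the topic
            return False
        if p != "+" and p != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)

# e.g. subscribe to every switch's LWT in the house, or every command:
topic_matches("tele/home/+/+/LWT", "tele/home/room1/switch1/LWT")   # True
topic_matches("cmnd/home/#", "cmnd/home/room1/switch1/POWER")       # True
```

This is why a consistent `prefix/location/device/command` layout pays off: wildcards then map cleanly onto "all devices in a room" or "all commands in the house".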

\n" }, { "Id": "2735", "CreationDate": "2018-03-16T19:23:45.270", "Body": "

I am experimenting with node-red, connecting it to my MQTT broker.

\n\n

The input (listening to a topic) works fine.

\n\n

I would also like to send some payload to a topic upon certain triggers. In the example below, this would be on a query to a web service:

\n\n

\"enter

\n\n

I did not find a place to add the payload to the published message; where can I set it up?

\n\n

The documentation mentions in the inputs section:

\n\n
\n

payload

\n \n

most users prefer simple text payloads, but binary buffers can also be\n published.

\n
\n\n

and later

\n\n
\n

msg.payload is used as the payload of the published message. If it\n contains an Object it will be converted to a JSON string before being\n sent. If it contains a binary Buffer the message will be published\n as-is.

\n
\n\n

I tried to add {msg: {payload: 'on'}} or {payload: 'on'} in node settings -> inputs, but the payload is not carried to MQTT, which shows an incoming cmnd/home/room1/switch1/power {}

\n\n

\"enter

\n", "Title": "How to add payload to a posted MQTT topic?", "Tags": "|mqtt|node-red|", "Answer": "

The payload comes from the incoming message. It does not make sense to define a static payload in the MQTT-out node.

\n\n

The entry in the node settings is just a mouse-over label for the input; since nodes can only have one input it's not that useful. It's more for when nodes have multiple outputs, to make it easier to identify which output is which.

\n\n

In the case you have posted a screenshot of, it will take the msg.payload output from the http-in node, which would be the body of an HTTP POST.

\n\n

If you want to add a static payload to be published, the easiest way is to add a change node between the http-in and mqtt-out nodes.

\n\n

e.g. to set the payload to \"foo\"

\n\n

\"enter

\n\n

You can set JSON objects by selecting the type from the drop down on the left hand end of the to field.

\n" }, { "Id": "2742", "CreationDate": "2018-03-19T18:57:40.693", "Body": "

I have several devices talking to my Mosquitto MQTT broker and when listening to incoming messages I get all relevant information except the IP of the subscribed client.

\n\n

Is it possible to configure Mosquitto so that this information is provided together with the topic and payload?

\n\n

This is a possible security issue if the ACLs are not configured properly so I can understand that the feature is disabled by default. In my case security is not a concern.

\n", "Title": "Can mosquitto publish the IP of clients?", "Tags": "|mqtt|mosquitto|", "Answer": "

No

\n\n

MQTT is a lightweight protocol, it carries nothing in the headers except what is needed (Topic, QOS & Retained flag).

\n\n

It also goes against the pub/sub philosophy: a publisher shouldn't know who is subscribed to a given topic, and a subscriber shouldn't care where the publisher is, just that the information was provided on a given topic.

\n\n

The only way would be to add the information to the payload yourself.
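For example, the publisher itself can discover its outbound IP address and embed it in a JSON payload. This is a minimal sketch; the broker and topic wiring is omitted, and note that connecting a UDP socket only sets a peer address, it sends no traffic:

```python
import json
import socket

def build_payload(value):
    """Return a JSON payload containing this host's IP and a reading."""
    # Connecting a UDP socket to a public address reveals the local
    # outbound IP without sending any packets.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 53))
        ip = s.getsockname()[0]
    except OSError:
        ip = "127.0.0.1"  # no usable network interface
    finally:
        s.close()
    return json.dumps({"ip": ip, "value": value})

# With paho-mqtt this would then be published as usual, e.g.:
# client.publish("sensors/room1", build_payload(21.5))
```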

\n" }, { "Id": "2757", "CreationDate": "2018-03-22T20:55:18.287", "Body": "

I'm trying to figure out how I will make an RFM69W LoRa Module work with The Things Network, so I looked over many tutorials, code samples, and libraries. The one I find closest to LoRaWan support is:

\n\n

https://github.com/matthijskooijman/arduino-lmic because of LoRaWan support.

\n\n

But it seems that it supports a specific chipset that may not be compatible with the Adafruit RFM69W.

\n\n

After a quick look over the Arduino libraries, the ones mentioned on Adafruit's site (such as RadioHead and LowPowerLab's) just send raw data over an RF channel.

\n\n

Has anyone tried to use the chip with the LoRaWan specs? If yes, how did you do it, and which libraries did you use for Arduino?

\n", "Title": "RFM69W and LoraWan", "Tags": "|lora|arduino|lorawan|", "Answer": "

Just to clarify, from what I understand when researching these modules:

\n\n\n\n

If you're trying to use an RFM69x for LoRa, that's your issue.

\n\n

A quick google search should show you how to get LoRaWan up and running if you have the correct radio module (RFM9xx).

\n\n

Hope this helps.

\n\n

(Link to the Adafruit Learn page for the radio modules I believe you are referring to.)

\n" }, { "Id": "2760", "CreationDate": "2018-03-23T12:11:47.280", "Body": "

I would like to replace normal (mechanical) switches at home with Sonoff Touch ones, which I will be able to manage.

\n\n

Today I have this:

\n\n

\"enter

\n\n

and by adding the Sonoff Touch I will have that (P1 is the WiFi module, the whole dashed box is the Sonoff Touch):

\n\n

\"enter

\n\n

The concern I am having is that the lamp(s) which are in series with the switch will lower the voltage on the switch. I do not know yet by how much.

\n\n

Are there practical indications of the voltage range required by a Sonoff Touch to function properly? (The documentation mentions 90~250V AC; I am more interested in experience-based cases.)

\n\n

Circuit schematics thanks to CircuitLab

\n", "Title": "What is the working voltage range of a Sonoff Touch?", "Tags": "|wifi|power-consumption|lighting|ac-power|", "Answer": "

Inside the Sonoff there will be an AC-to-DC converter, like the Hi-Link HLK-PM03 220V to 3.3V step-down isolated buck power supply module, which is used to power the Wi-Fi module separately. I have built the same kind of Wi-Fi smart switch, much like the Sonoff, using a solid-state relay, a generic ESP-12 module and a simple LED inside an AC outlet.

\n\n

If the Sonoff is connected directly to the power supply and the lamps are connected in series at the Sonoff's output, there will be no power supply issue.

\n\n

And if the power supply feeds both the lamps in series and the Sonoff simultaneously, I think there is no problem either.

\n\n

My circuit was like this. To create a small product like the Sonoff, replace the solid-state relay with generic relays and you will have a small custom product.

\n" }, { "Id": "2762", "CreationDate": "2018-03-25T08:21:49.560", "Body": "

Are single boards always necessary when implementing an IoT connection? Can they be substituted by normal smartphones in, let's say, a business setting so as to communicate to POS/Internet-enabled sale system?

\n\n

I'm not well oriented with Android Things. I just saw the option to include its support when creating a new application in Android Studio. Any and all helpful advice is welcome.

\n", "Title": "In IoT, Is it necessary to involve single-boards such as Raspberry Pi as the only communication devices?", "Tags": "|raspberry-pi|android|android-things|", "Answer": "
\n

Are single boards always necessary when implementing an IoT connection?

\n
\n\n

No, Single Board Computers aren't necessary. They provide a very effective way to build a proof of concept, and are instrumental while the developers are creating the software stack. But for mass-scale production a specific design is created, which brings power, thermal and performance benefits. For example:

\n\n\n\n
\n

Can they be substituted by normal smartphones in, let's say, a business setting so as to communicate to POS/Internet-enabled sale system?

\n
\n\n

Using a smartphone would be overkill, as smartphones are built to be multipurpose. Repurposing a smartphone would hurt the solution in the following aspects:

\n\n\n" }, { "Id": "2766", "CreationDate": "2018-03-26T13:07:41.990", "Body": "

For a LoRa experiment I am looking over the Arduino LMIC library, and also at the pinout of the RFM98 LoRa module:

\n\n

\"enter

\n\n

But I cannot figure out how to configure the library in order to use the MOSI and MISO pins (as far as I understand, the received information will arrive serially on the MOSI pin and I will send data on the MISO pin). I mean, the library provides this configuration:

\n\n
lmic_pinmap lmic_pins = {\n    .nss = 6,\n    .rxtx = LMIC_UNUSED_PIN,\n    .rst = 5,\n    .dio = {2, 3, 4},\n};\n
\n\n

And I understand that .rxtx may be mapped to Arduino pins for data I/O into the LoRaWan network (using LoRa modulation), but which goes first and which goes second?

\n\n

I guess that the configuration code will be:

\n\n
.rxtx={^somepin^,^anotherpin^}\n
\n", "Title": "HopeRf RFM98 and arduino-lmic library", "Tags": "|arduino|lorawan|", "Answer": "

This interface is perhaps not as well documented as it should be.

\n\n

As it turns out, the actual SPI data and clock pins (SCK, MOSI, and MISO) are not specified in the struct; instead they are assumed to be wired consistent with the hardware SPI engine.

\n\n

The arguments that are specified in the struct are limited to things which are configurable. That includes the slave select (NSS) and reset pins.

\n\n

To clarify two points of apparent confusion:

\n\n\n\n

For purposes of this library, packet buffer data is written and read over the SPI bus, rather than on discrete pins. It is possible to configure two of the DIO pins to be a data output and slicer clock, but that is not used here.

\n\n

Some of this information can be gleaned from the README.md of the Arduino-LMIC library, but some of it is only clear in the code. Reading the Semtech data sheet is also useful.

\n" }, { "Id": "2770", "CreationDate": "2018-03-27T11:42:22.707", "Body": "

I need to measure the amount of carbon-dioxide in parts per million (ppm) within a closed environment. Currently, I am using a MQ-135 gas sensor which has some sort of concentration on Carbon-dioxide. Unfortunately, that sensor does not provide accurate values.

\n\n

What gas sensor can I get to obtain precise data?

\n\n

The cost should not be more than USD 30. The MQ-135 has only a very slight sensitivity to CO\u00b2, and the value it returns fluctuates heavily: within seconds the reading changes from 450 ppm to 700 ppm. I want a digital sensor that gives accurate data, like the Carbon Dioxide Meter PCE-WMM 50.

\n", "Title": "What is the best sensor for the measurement of Carbon-dioxide?", "Tags": "|sensors|hardware|arduino|", "Answer": "

Sensor Used: MH-Z19 CO\u00b2 Sensor.
\nIt is a common, small-size sensor using the non-dispersive infrared (NDIR) principle to detect CO\u00b2 in the air, with good selectivity, no oxygen dependence and a long life.

\n\n

Output Modes: UART and PWM wave\nI personally found this useful and used it in a project. You can also use the following github link to set it up with a NodeMCU.

\n\n

Details of the sensor can be found in this user manual (PDF).
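As an illustration of the UART mode, the MH-Z19 speaks a simple 9-byte framed protocol where command 0x86 reads the CO2 concentration. The sketch below only builds and parses frames, so it can be checked without the sensor attached; the byte layout and checksum rule are taken from the commonly published MH-Z19 documentation, so verify them against the linked user manual:

```python
def mhz19_checksum(frame):
    """Checksum over bytes 1..7: invert their sum, then add 1 (mod 256)."""
    return (0xFF - (sum(frame[1:8]) & 0xFF) + 1) & 0xFF

# "Read CO2 concentration" command frame: start byte, sensor #1, cmd 0x86.
READ_CO2 = bytes([0xFF, 0x01, 0x86, 0, 0, 0, 0, 0])
READ_CO2 = READ_CO2 + bytes([mhz19_checksum(READ_CO2)])

def parse_co2(response):
    """Parse the 9-byte response to the 0x86 command; return ppm."""
    assert len(response) == 9 and response[0] == 0xFF and response[1] == 0x86
    assert response[8] == mhz19_checksum(response), "bad checksum"
    # Concentration is high byte * 256 + low byte.
    return response[2] * 256 + response[3]
```

Writing READ_CO2 to the UART at 9600 baud and feeding the 9 bytes that come back into parse_co2 yields the reading in ppm.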

\n" }, { "Id": "2774", "CreationDate": "2018-03-29T07:56:50.853", "Body": "

This question arises because every time you board a plane, the crew asks passengers to turn their phones off or use flight mode.

\n\n

The core reason for this is that the wireless adapters in a typical smartphone might affect the navigation and other on-board systems in the cockpit.

\n\n

It would therefore be an exceptional challenge to find a wireless band that can be used on board for Wireless Sensor Networks.

\n\n

Is there any research done in finding optimal bands that do not interfere with the systems and could present IoT in Aviation as a strong application?

\n", "Title": "Which Wireless Frequency Bands could be preferrable for IoT in Aviation?", "Tags": "|wireless|", "Answer": "
\n

Is there any research done in finding optimal bands that do not\n interfere with the systems and could present IoT in Aviation as a\n strong application?

\n
\n\n

Wireless Avionics Intra-Communications (WAIC), a wireless communication system researched by AVSI (the Aerospace Vehicle Systems Institute), is being developed to replace complex cabling in avionics with wireless links. According to the available material, the 4200 - 4400 MHz frequency band is allocated for the WAIC system.

\n\n

Some key examples of Potential WAIC Safety Applications

\n\n\n" }, { "Id": "2780", "CreationDate": "2018-03-30T08:50:06.083", "Body": "

I don't have a particular camera in mind right now, I'm just curious how this is done, programmatically/mathematically.

\n\n

I have a 3D space, a rectangle, with a camera up in one corner looking inwards.
\nI have a moving object in that rectangle that's transmitting (x, y, z) coordinates of its current position.
\nI want to take those coordinates and translate them into instructions telling the camera to point at that position.
\nHow is this translation typically done?

\n", "Title": "How can I programmatically tell a camera where to point?", "Tags": "|tracking-devices|surveillance-cameras|", "Answer": "

Great answers already, I'd just like to add a few other things that you should take into consideration. Like hardlib and Goufalite have already mentioned, the way to do this is trigonometrically. I've drawn out a 2-d depiction of the camera and the IoT object:

\n\n

\"enter

\n\n

As you can see, the camera's field of view is going to be larger than the object, if not at close range then certainly once the object moves further away.

\n\n

Now, you may want the camera always centred on the object. In that case, you can simply take the calculations that hardlib referenced:

\n\n
\u03f4 = arctan(y/x)\n
\n\n

...which will be the angle counterclockwise from the x-axis, per convention. You'll also need the angle away from level:

\n\n
\u03b1 = arctan(z / ((y^2+x^2)^1/2))\n
\n\n

Obviously, you'll have to calculate based off of the camera position being at the origin in all three axes.
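The two formulas above can be combined into one small function. This sketch assumes the camera sits at the origin, and uses atan2 instead of arctan(y/x) to avoid division-by-zero and quadrant problems:

```python
import math

def camera_angles(x, y, z):
    """Return (pan, tilt) in degrees for a camera at the origin.

    pan:  counterclockwise from the x-axis, theta = atan2(y, x)
    tilt: elevation above the horizontal plane,
          alpha = atan2(z, sqrt(x^2 + y^2))
    """
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return math.degrees(pan), math.degrees(tilt)
```

The resulting angles would then be mapped onto whatever pan/tilt command set the camera exposes.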

\n\n

On the other hand, you may prefer to not make the camera move more than necessary, that is, to make the camera only move once the object appears to be about to move out of the frame. In that case, you'll probably want a \"pressure\" variable which will make the camera more likely to change its angle based on how close the object is to the edge of the frame.

\n\n

If you go that route, you'll need to know the angle of the camera's field of view in both dimensions, so that you can determine where the object is relative to the frame.

\n" }, { "Id": "2793", "CreationDate": "2018-03-31T17:02:05.920", "Body": "

Is there any open source firmware or OS for IP cameras, like OpenWrt for routers?

\n\n

I need an OS or a firmware for IP cameras in order to communicate with our cloud service easily. I have some experience with OpenWrt that's why I'm thinking in that way.

\n\n

My target is to get an HTTP call (at minimum a GET call) from the IP camera. Are there any IP cameras using open source firmware?

\n", "Title": "Is there any open source firmware or OS for IP camera like openwrt for routers", "Tags": "|smart-home|digital-cameras|open-source|", "Answer": "

It appears that OpenIPC firmware is exactly like OpenWrt for IP cameras.

\n" }, { "Id": "2797", "CreationDate": "2018-04-03T16:42:45.060", "Body": "

What infra-red codes are emitted by the depicted* remote control used by many hood manufacturers (Faber/Mepamsa, AEG, Franke, Smeg, Airlux...)?

\n\n

Identifying the manufacturer of the remote control / IR chip could be enough. It could be listed in the lirc, irdb... IR code repositories (hood manufacturers are not listed there).

\n\n

\"remote

\n\n

* I do not have the device, otherwise I would record the codes.

\n\n

The goal is for the smart hob to inform the hub that frying has started, but I do not know the IR codes to start the hood. The RC is quite expensive, so I thought I'd rather ask first.

\n", "Title": "What IR codes are emitted by Faber/AEG/Franke/Smeg... RC?", "Tags": "|communication|infrared|", "Answer": "

Light toggle:

\n\n
&\\u0000\\u0014\\u0000\\u0018\\u0017\\u0018\\u00170-\\u0018\\u0017\\u0018\\u00170[\\u0019\\u0016\\u0018\\u0017H\\u0000\\r\\u0005\\u0000\\u0000\\u0000\\u0000\n
\n\n

Fan toggle:

\n\n
&\\u0000\\u0014\\u0000\\u0018\\u0016\\u0019-1\\u0016\\u0018\\u0017\\u0018-1D\\u0019-\\u0018\\u00170\\u0000\\r\\u0005\\u0000\\u0000\\u0000\\u0000\n
\n\n

Captured with a https://www.ibroadlink.com/rmPro+/ and https://github.com/momodalo/broadlinkjs

\n\n

Hope that helps. I've not yet found whether there are secret separate 'on' and 'off' codes; toggling really sucks when trying to connect up to home automation.

\n" }, { "Id": "2803", "CreationDate": "2018-04-04T18:02:30.637", "Body": "

I would like to understand if secure pairing can be implemented in all (or none?) IoT RF standards.

\n\n

For example: BLE provides an out-of-band option, which seems like the only really secure option to me. But I have a hard time finding information on the pairing security of WiFi, Z-Wave, ZigBee, XBee, LoRa, Ingenu, TI 15.4 and other protocols.

\n\n

Could you suggest some standards which support secure pairing \"out of the box\"?

\n", "Title": "Secure pairing support in various RF standards", "Tags": "|security|protocols|wireless|", "Answer": "

Most modern RF standards/protocols offer or require secure pairing. The alliances or consortia behind the standards have each taken the necessary steps in their own way:

\n\n\n\n

You can read further detail on the respective websites. Different protocols use different language for this process and it can be confusing; the loosely equivalent terms are usually: Pairing, Joining, Commissioning, and Provisioning.

\n" }, { "Id": "2810", "CreationDate": "2018-04-06T07:32:20.110", "Body": "

I am developing a helper app to clean the retained messages on my Mosquitto MQTT service. The problem I have is how to process the queue once with Paho MQTT.

\n

I know how to

\n\n

What I would like to do is process the queue once and quit (and process each of the topics gathered this way, which are most likely retained messages).

\n

loop() was the most promising:

\n
\n

Call regularly to process network events. This call waits in select()\nuntil the network socket is available for reading or writing, if\nappropriate, then handles the incoming/outgoing data.

\n
\n

Unfortunately I do not see any topics when using it.

\n

Right now my code starts a thread, waits 2 seconds and stops it. It does the job, but I would like to understand how to do this cleanly through one-pass processing:

\n
import paho.mqtt.client as mqtt\nimport time\n\nclass MQTT:\n\n    def __init__(self):\n        print("initializing app")\n        self.client = mqtt.Client()\n        self.client.on_connect = self.on_connect\n        self.client.on_message = self.on_message\n        self.client.connect("mqtt.example.com", 1883, 60)\n        self.client.loop_start()\n        time.sleep(2)\n        self.client.loop_stop()\n\n    def on_connect(self, client, userdata, flags, rc):\n        print("connected to MQTT with result code " + str(rc))\n        self.client.subscribe("#")\n\n    def on_message(self, client, userdata, msg):\n        # EDIT: added a check for actually retained messages\n        if msg.retain:\n            print(f"removing retained {msg.topic}")\n            self.client.publish(msg.topic, retain=True)\n\nMQTT()\n
\n", "Title": "How to process the MQTT queue once?", "Tags": "|mqtt|mosquitto|paho|", "Answer": "

Retained messages will only be delivered once (per connection).

\n\n

And there can only be 1 retained message on a given topic at any one time.

\n\n

So just connect, start the loop and subscribe to the topics you are interested in. When the message is delivered check the retained flag on the messages. If the message is retained then you can publish a new message with a null payload and the retained bit set to clear that topic. There is no need to do anything strange with the network loop.
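The decision part of that message handling can be kept as pure logic, which makes it easy to reason about without a broker; wiring it into a paho on_message callback then looks like the question's own code. A sketch:

```python
def clearing_publish_args(topic, retain):
    """Decide how to handle one delivered message.

    Returns the (topic, payload, retain) arguments needed to clear a
    retained message, or None if the message was a live (non-retained)
    delivery and should be left alone.
    """
    if not retain:
        return None
    # A null payload with the retained bit set deletes the broker's
    # retained message for that topic.
    return (topic, None, True)
```

Inside on_message this becomes: `args = clearing_publish_args(msg.topic, msg.retain)`, then `client.publish(args[0], args[1], retain=args[2])` whenever args is not None.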

\n" }, { "Id": "2813", "CreationDate": "2018-04-07T11:33:20.533", "Body": "

I thought it would be nice to set up my own public LoraWan gateway in order to extend \"The Things Network\". But in order to do that I think I will need a proper antenna to achieve maximum range.

\n\n

So I thought that using a television antenna (Yagi-Uda or similar) would be an affordable way to achieve a decent range; would I actually achieve that, or am I talking nonsense?

\n\n

The proposed gateway location is a single-storey building (~3m altitude) at Acharnes, Greece. The possible obstacles are taller buildings, ~10m high. There is also a mountain in direct sight, but I do not care about that obstacle. I want to cover as much distance as possible, with a budget for the antenna of less than 20 euros.

\n", "Title": "Using television Yagi antenna on a public homemade LoraWan gateway", "Tags": "|lora|lorawan|", "Answer": "

There is a great variety of omni 868Mhz antennae way under 20\u20ac that can do the job.\nhttps://eu.mouser.com/Passive-Components/Antennas/_/N-8w0fa?P=1z0wn4u&No=25

\n" }, { "Id": "2820", "CreationDate": "2018-04-09T13:41:23.740", "Body": "

While searching for libraries that can be used with LoRa and Arduino, I came across this one: https://github.com/sandeepmistry/arduino-LoRa

\n\n

Also the RFM9x has the following pinout:\n\"RMF

\n\n

The data transfer on this chip is achieved via an SPI interface. Also, as I see in the library's API specification, I can set up the Slave Select (ss), reset and dio0 pins with the function:

\n\n
LoRa.setPins(ss, reset, dio0);\n
\n\n

So I should configure the reset and dio0 pins like this:

\n\n
Parameter Name -> On RMF9x Chip pin\nreset -> RESET\ndio0 -> DIO0\n
\n\n

But I cannot figure out how to connect the ss pin mentioned in the method above; is it the NSS pin on the RFM9x chip?

\n", "Title": "Using Arduino LoRa with RFM9X chipset: How to connect the RMF9x NSS pin?", "Tags": "|lora|arduino|", "Answer": "

As the library's documentation states, the RFM9x's NSS pin should be connected to pin 10. Also, as you can see in the following image:

\n\n

\"Arduino the pin 10 is the SPI's slave select pin (ss in short) so the NSS pin on the RFM9x is connected into slave select pin.

\n\n

So in the function:

\n\n
LoRa.setPins(ss, reset, dio0);\n
\n\n

The ss variable takes the number of the pin that is connected to the RFM9x's NSS pin.

\n\n

This confirms that the RFM9x's NSS pin is used as the SPI slave select.

\n" }, { "Id": "2823", "CreationDate": "2018-04-10T09:34:46.797", "Body": "

Scenario

\n\n

I developed a simple board that connects via WiFi and uploads some data to a remote server. The network management is up to the customer, but now he is asking me for hints on how to work around the limited number of connections allowed.

\n\n

The issue

\n\n

There are thousands of these boards around, but only a few hundred of them are within range of the router (all I know is that it is a Huawei router that allows 60 WiFi connections at a time).

\n\n

Currently the firmware tries to connect every 20 minutes and, if the connection is granted, doing all the work takes about 1-2 minutes. The boards then upload another set of records after a further 20 minutes, but this time the connection lasts only a fraction of a second (because there is no data, just a \"keep alive\" signal to the server).

\n\n

Each board remains in this area for a few hours, which means that over the whole day the mean number of boards in the \"upload area\" is fairly constant.

\n\n

With this setup it often happens that many units cannot upload their data within their parking-slot time, so the next time they have twice as much data to upload, and eventually the situation gets worse and worse.

\n\n

Question

\n\n

Of course, the obvious first step is to try to find another router that allows more connections.

\n\n

My question is about software patterns/strategies to improve the overall efficiency.

\n\n

My thoughts: all boards try to connect at random times; they are not tied to a specific hour, hence we can assume the connection probability at any given time is fairly uniform. If the router can accept 60 connections at a time, the next one will be refused. If I can catch this error, I could retry more frequently than every 20 minutes.

\n\n

In this way I should increase the probability of finding a free 'slot' in the same amount of time spent there. Of course I wouldn't do that if I cannot tell whether the connection was refused for other reasons, because it would drain the battery too fast.

\n\n

Any other ideas?\nAre there routers specifically designed for such an IoT application? And, to your knowledge, is the limitation due to radio bandwidth or to the router's internal capacity to handle more connections?

\n", "Title": "Avoid limited connections of WiFi Router", "Tags": "|wifi|routers|", "Answer": "

A more structured allocation might be better, because you are very tight on available time slots. I think you also need to run some Monte Carlo simulations of your proposed system (and first check that you can replicate your current failure timescale).

\n\n

I'd suggest incrementally slowing down any endpoint which is polling but failing. Your steady state should be within the available bandwidth, so the slowdown ought not to impact many active endpoints. If you do not back off, it only takes a couple of hundred endpoints to completely DoS your network.
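One common way to implement that incremental slow-down is exponential backoff with jitter. This sketch only computes the next delay (the base and cap values are illustrative assumptions, not measured figures for this system):

```python
import random

def next_retry_delay(failures, base=20 * 60, cap=4 * 3600):
    """Seconds to wait before the next connection attempt.

    failures -- number of consecutive failed/refused connections
    base     -- the normal 20-minute polling interval, in seconds
    cap      -- upper bound so a board never goes completely silent

    Doubles the interval per failure, then adds +/-10% random jitter
    so boards that failed together don't all retry in the same instant.
    """
    delay = min(base * (2 ** failures), cap)
    jitter = delay * random.uniform(-0.1, 0.1)
    return delay + jitter
```

A board would reset its failure counter to zero after any successful upload, returning to the normal 20-minute cycle.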

\n\n

You might also need to better characterise the connections and the start-up state (assuming endpoints are static, but the router got rebooted). When all 60 connections are in use, does the next transmission kick off one of them, block several established connections, or need to wait longer for a slot to be freed up? What is the minimum number of pathologically timed connections that blocks the system (with worst-case overlap of the 2-minute slots)? These ought to be reasonably easy to model in software.

\n" }, { "Id": "2832", "CreationDate": "2018-04-12T14:50:30.573", "Body": "

As far as I know the LoraWan v1.1 Spec uses the following keys:

\n\n\n\n

I have also seen that it supports the following keys when performing a join procedure:

\n\n\n\n

I have also looked over a current ABP activation code example using IBM's LMIC library.

\n\n

And I saw that it has only a network session key and an application session key, according to the snippet:

\n\n
static const PROGMEM u1_t NWKSKEY[16] = { 0x2B, 0x7E, 0x15, 0x16, 0x28, 0xAE, 0xD2, 0xA6, 0xAB, 0xF7, 0x15, 0x88, 0x09, 0xCF, 0x4F, 0x3C };\n\nstatic const u1_t PROGMEM APPSKEY[16] = { 0x2B, 0x7E, 0x15, 0x16, 0x28, 0xAE, 0xD2, 0xA6, 0xAB, 0xF7, 0x15, 0x88, 0x09, 0xCF, 0x4F, 0x3C };\n
\n\n

And I wonder why they have only 2 session keys. So I want to ask:

\n\n\n", "Title": "Why does IBM's arduino-lmic example for abp activation have only 2 keys?", "Tags": "|lorawan|", "Answer": "
\n

Do they support the version 1.0.x and not the latest 1.1 version?

\n
\n\n

Indeed.

\n\n

Though I'm not sure of the status of the original LMiC library, the Arduino port you're referring to is based on LMiC 1.5 and indeed only supports LoRaWAN 1.0.x. It's not explicitly noted in its README, but looking at the changes you'll see that the last meaningful commit dates back to August 2017, while LoRaWAN 1.1.x was not released until November that year.

\n\n

Beware that your LoRaWAN provider might not be supporting 1.1 either. (Like the public network of The Things Network only supports 1.0.x right now.)

\n" }, { "Id": "2833", "CreationDate": "2018-04-13T12:48:42.570", "Body": "

I'd been hearing a lot about 5G recently from a few people I know, and one of the frequent claims that I heard was, \"5G is going to be such a great help for the Internet of Things!\"

\n\n

I may be a hopeless cynic, but I began to wonder, just why would this be the case? After all, the majority of IoT doesn't involve the transfer of huge amounts of data. Furthermore, it doesn't seem like an extra second or two of delay will make a lot of difference for your smart washing machine or even your lights. Hey, they already seem fast enough to me.

\n\n

So why would 5G be a plus for the Internet of Things?

\n", "Title": "Why is 5G a plus for IoT?", "Tags": "|protocols|data-transfer|", "Answer": "

One thing that should be noted is that 5G will not just improve network performance from a single user perspective, but will also utilise bandwidth more efficiently; more devices will be served by a single base station.

\n\n

One enabling technology should be of particular interest to the IoT community: Massive MIMO. Massive MIMO has the potential to be used for energy harvesting. From the linked paper:

\n\n
\n

To achieve this, the BSs can transmit dedicated RF signals on the DL\n and perform energy beamforming so as to provide uninterrupted wireless\n energy transfer (WET) to the UEs. Alternatively, the BSs can attempt\n simultaneous wireless information and power transfer (SWIPT), where\n the DL RF signals are used to simultaneously transport both energy and\n information to the UEs. MM is particularly suitable for such RF EH\n applications because the large array gains offered by MM can increase\n the energy transfer efficiency of RF signals

\n
\n\n

To be clear, energy harvesting is not part of the 5G specification and I don't have any good information on the maturity of this technology. In 2016 I attended a presentation from Lund university where I understood that a proof of concept for massive MIMO energy harvesting was planned. I've not heard anything more following.

\n\n

If 5G is a step towards powering IoT devices wirelessly then that is pretty exciting, but even without that, the increased device density it will offer is of serious interest.

\n" }, { "Id": "2856", "CreationDate": "2018-04-19T07:50:48.263", "Body": "

How do I cross-compile the Paho MQTT C library for the ARM platform?

\n\n

Here are the steps I followed.

\n\n
1) Downloaded the library from https://github.com/eclipse/paho.mqtt.embedded-c\n2) After the download, I opened the directory and entered some commands.\n3) Command for setting the GCC-ARM toolchain (environment variable path)\n4) make CC=(ARM-CROSS_COMPILE)gcc\n
\n\n

Next, I observed the following error:

\n\n
mkdir -p build/output/samples\nmkdir -p build/output/test\necho OSTYPE is Linux\nOSTYPE is Linux\nsed -e \"s/@CLIENT_VERSION@/1.2.0/g\" -e \"s/@BUILD_TIMESTAMP@/Mon Apr 16 17:13:10 IST 2018/g\" src/VersionInfo.h.in > build/VersionInfo.h\narm-cortexa9-linux-gnueabihf-gcc -g -fPIC  -Os -Wall -fvisibility=hidden -Ibuild -o build/output/libpaho-mqtt3c.so.1.0 src/MQTTPersistence.c src/Heap.c src/Socket.c src/MQTTProtocolClient.c src/MQTTProtocolOut.c src/StackTrace.c src/MQTTPersistenceDefault.c src/MQTTClient.c src/Messages.c src/MQTTPacketOut.c src/Clients.c src/OsWrapper.c src/Thread.c src/MQTTPacket.c src/Log.c src/LinkedList.c src/utf-8.c src/SocketBuffer.c src/Tree.c  -shared -Wl,-init,MQTTClient_init -lpthread -Wl,-soname,libpaho-mqtt3c.so.1\nln -s libpaho-mqtt3c.so.1.0  build/output/libpaho-mqtt3c.so.1\nln -s libpaho-mqtt3c.so.1 build/output/libpaho-mqtt3c.so\narm-cortexa9-linux-gnueabihf-gcc -g -fPIC  -Os -Wall -fvisibility=hidden -Ibuild -o build/output/libpaho-mqtt3cs.so.1.0 src/MQTTPersistence.c src/Heap.c src/Socket.c src/SSLSocket.c src/MQTTProtocolClient.c src/MQTTProtocolOut.c src/StackTrace.c src/MQTTPersistenceDefault.c src/MQTTClient.c src/Messages.c src/MQTTPacketOut.c src/Clients.c src/OsWrapper.c src/Thread.c src/MQTTPacket.c src/Log.c src/LinkedList.c src/utf-8.c src/SocketBuffer.c src/Tree.c -DOPENSSL  -shared -Wl,--start-group -lpthread -ldl -lssl -lcrypto -Wl,--end-group -Wl,-init,MQTTClient_init -Wl,-soname,libpaho-mqtt3cs.so.1 -Wl,-no-whole-archive\nIn file included from src/MQTTPersistence.h:23:0,\n             from src/MQTTPersistence.c:28:\nsrc/Clients.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/Socket.c:32:0:\nsrc/SocketBuffer.h:28:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/SSLSocket.c:31:0:\nsrc/SocketBuffer.h:28:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn 
file included from src/MQTTPacket.h:25:0,\n             from src/MQTTProtocolClient.h:25,\n             from src/MQTTProtocolClient.c:34:\nsrc/SSLSocket.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/MQTTPacket.h:25:0,\n             from src/MQTTProtocolOut.h:24,\n             from src/MQTTProtocolOut.c:35:\nsrc/SSLSocket.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/StackTrace.c:21:0:\nsrc/Clients.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/MQTTPersistence.h:23:0,\n             from src/MQTTClient.c:53:\nsrc/Clients.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/MQTTPacket.h:25:0,\n             from src/MQTTPacketOut.h:22,\n             from src/MQTTPacketOut.c:29:\nsrc/SSLSocket.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/Clients.c:24:0:\nsrc/Clients.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/MQTTPacket.h:25:0,\n             from src/MQTTPacket.c:26:\nsrc/SSLSocket.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/MQTTPacket.h:25:0,\n             from src/Log.c:27:\nsrc/SSLSocket.h:29:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nIn file included from src/SocketBuffer.c:25:0:\nsrc/SocketBuffer.h:28:25: fatal error: openssl/ssl.h: No such file or directory\ncompilation terminated.\nMakefile:219: recipe for target 'build/output/libpaho-mqtt3cs.so.1.0' failed\nmake: *** [build/output/libpaho-mqtt3cs.so.1.0] Error 1\n
\n\n

I then installed OpenSSL using this command:

\n\n
\n

$ sudo apt-get install libssl-dev*

\n
\n\n

OpenSSL installed successfully, but I still get the same error as above.

\n\n

What do I have to do to compile the Paho MQTT C library for ARM?

\n", "Title": "How to cross compile Paho-MQTT-C library for ARM?", "Tags": "|mqtt|linux|paho|", "Answer": "

You are facing these build errors because you are missing the sysroot corresponding to the arm-gcc that you are using as your CC.

\n\n

You may try changing the Makefile as follows:

\n\n
diff --git a/Makefile b/Makefile\nindex 49cbb13..1123f27 100755\n--- a/Makefile\n+++ b/Makefile\n@@ -67,7 +67,7 @@ endif\n\n ifeq ($(OSTYPE),Linux)\n\n-CC ?= gcc\n+CC ?= /path/to/arm-linux-gnueabi/arm-linux-gnueabi-gcc-4.9.3\n\n ifndef INSTALL\n INSTALL = install\n@@ -82,10 +82,10 @@ VERSION = ${MAJOR_VERSION}.${MINOR_VERSION}\n EMBED_MQTTLIB_C_TARGET = ${blddir}/lib${MQTT_EMBED_LIB_C}.so.${VERSION}\n\n\n-CCFLAGS_SO = -g -fPIC -Os -Wall -fvisibility=hidden -DLINUX_SO\n-FLAGS_EXE = -I ${srcdir}  -L ${blddir}\n+CCFLAGS_SO = --sysroot=/path/to/sysroot -g -fPIC -Os -Wall -fvisibility=hidden -DLINUX_SO\n+FLAGS_EXE = --sysroot=/path/to/sysroot -I ${srcdir}  -L ${blddir}\n\n-LDFLAGS_C = -shared -Wl,-soname,lib$(MQTT_EMBED_LIB_C).so.${MAJOR_VERSION}\n+LDFLAGS_C = --sysroot=/path/to/sysroot -shared -Wl,-soname,lib$(MQTT_EMBED_LIB_C).so.${MAJOR_VERSION}\n\n all: build\n\n@@ -168,8 +168,8 @@ VERSION = ${MAJOR_VERSION}.${MINOR_VERSION}\n EMBED_MQTTLIB_C_TARGET = ${blddir}/lib${MQTT_EMBED_LIB_C}.so.${VERSION}\n\n\n-CCFLAGS_SO = -g -fPIC -Os -Wall -fvisibility=hidden -Wno-deprecated-declarations -DUSE_NAMED_SEMAPHORES\n-FLAGS_EXE = -I ${srcdir}  -L ${blddir}\n+CCFLAGS_SO = --sysroot=/path/to/sysroot -g -fPIC -Os -Wall -fvisibility=hidden -Wno-deprecated-declarations -DUSE_NAMED_SEM\n+FLAGS_EXE = --sysroot=/path/to/sysroot -I ${srcdir}  -L ${blddir}\n\n LDFLAGS_C = -shared -Wl,-install_name,lib$(MQTT_EMBED_LIB_C).so.${MAJOR_VERSION}\n
\n\n

You will need to add the right sysroot path to the CCFLAGS_SO, FLAGS_EXE and LDFLAGS_C variables in the Makefile; the paths above are only examples.

\n\n

When I made these substitutions, I was able to build Paho.

\n" }, { "Id": "2867", "CreationDate": "2018-04-23T04:31:18.727", "Body": "

I am having issues receiving responses to AT commands from the NINA after switching from data mode back to command mode. I used the \"ATS2?\" command to check the escape character and received a single 43 (decimal) as a response, which is '+' in ASCII. Section 2.5 of the \"NINA-B1 Getting Started\" manual states the following:

\n\n

\"By default, NINA-B1 will enter command mode and has to be reconfigured to start up in data mode or extended data mode. From the data mode or extended data mode, it is possible to enter the command mode by transmitting escape sequence to the module. By default, the escape sequence is:

\n\n
1. Silence 1 second\n2. +++\n3. Silence 1 second\"\n
\n\n

Here is the NINA-B1 AT commands Manual

\n\n

I did the above in my program and also get an \"OK\", but for some reason, after switching from data mode to command mode, the NINA does not respond to AT commands. Below is my code snippet for switching from data mode to command mode.

\n\n
void TS_NinaDataModeToCommandMode(void){\n  _delay_ms(1000);\n  UART_write('+');\n  UART_write('+');\n  UART_write('+');\n _delay_ms(1000);\n}\n
\n\n

Any insights would be helpful.

\n", "Title": "Unable to receive response to AT commands after switching from data mode to command mode", "Tags": "|bluetooth-low-energy|", "Answer": "

After repeatedly going through the code and my logic with no luck, I finally decided to probe the signals and check the timing of the 1-second silence preceding and following the escape sequence. The measurements showed that the silence was slightly less than 1 second (I don't remember exactly how much), so I changed the above code to the following and it started to work.

\n\n
void TS_NinaDataModeToCommandMode(void){\n  _delay_ms(1050);\n  UART_write('+');\n  UART_write('+');\n  UART_write('+');\n _delay_ms(1050);\n}\n
\n\n

I am going to reduce the 1050 ms delay further to find the optimal value, since these software delays are not precise. I hope this helps!

\n\n
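For context on why the margin matters: the module looks for true silence on both sides of the escape characters, while `_delay_ms` loops can run slightly short. A quick, hypothetical timing-budget check (the 115200 baud rate is an assumed example, not from the question):

```python
# Each UART character at 8N1 is 10 bit-times (start + 8 data + stop)
def char_time_ms(baud, bits_per_char=10):
    return 1000.0 * bits_per_char / baud

# At 115200 baud the three '+' characters take ~0.26 ms in total,
# so almost the entire escape-sequence budget is the guard silence
print(round(3 * char_time_ms(115200), 2))
```

This shows the characters themselves are negligible; it is the guard silence on either side that must not be undercut by an imprecise delay loop.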

For more details please visit the forum discussion thread here

\n" }, { "Id": "2874", "CreationDate": "2018-04-24T12:28:25.450", "Body": "

I created a LoRa-to-USB pass-through via Arduino in order to use it as a temporary gateway for development purposes. So I need to know how big a LoRa packet, including PHY headers, can be, in order to determine how long it will take my software to read and forward the data over USB.

\n\n

In other words, my architecture will be as follows:

\n\n

\"Architecture

\n\n

So the PC/laptop will need to \"know\" how big the LoRa packet can be, so that it does not read an unbounded number of bytes via USB.

\n", "Title": "What is the maximum Packet Size of a LoraWan including the phy headers", "Tags": "|lora|lorawan|", "Answer": "
\n

How big is a LoRa packet including PHY headers

\n
\n\n

I assume you mean the MAC header? After the LoRa chip has demodulated the LoRa radio signal for you, it will give you the LoRa PHY payload. For a LoRaWAN uplink, that PHY payload holds a MAC header, MAC payload and MIC.

\n\n

For 1.0.x the rule of thumb seems to be that a LoRaWAN packet is at least 13 bytes larger than the application payload:

\n\n
\n

I think usually at least 13 [ MHDR (1) + DevAddr (4) + FCtrl (1) + FCnt (2) + Fport(1) + MIC(4) ] in a packet with no options

\n
\n\n

The maximum application payload depends on the selected data rate. If a node should be able to operate in worst conditions, then one should assume the worst data rate, SF12, where the node should not send more than about 51 bytes. (Where in best conditions, SF7, that might be 222 bytes.) All that also depends on the region, I think. (And things might be better when the LoRaWAN node does not use LoRa, but FSK.)

\n\n
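As a sketch of that rule of thumb (a 1.0.x uplink with no MAC options, using the data-rate-dependent 51/222-byte application-payload maxima mentioned above):

```python
# LoRaWAN 1.0.x uplink without MAC options:
# PHYPayload = MHDR(1) + DevAddr(4) + FCtrl(1) + FCnt(2) + FPort(1)
#              + application payload + MIC(4)
OVERHEAD = 1 + 4 + 1 + 2 + 1 + 4  # 13 bytes

def phy_payload_len(app_payload_len):
    """Total LoRa PHY payload length for a given application payload."""
    return OVERHEAD + app_payload_len

print(phy_payload_len(51))   # SF12 worst case -> 64
print(phy_payload_len(222))  # SF7 best case   -> 235
```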

So, for your use case, I'd try not to depend on some maximum length through USB. Instead:

\n\n\n" }, { "Id": "2884", "CreationDate": "2018-04-30T05:40:06.303", "Body": "

I recently came across the EU General Data Protection Regulation (GDPR), meant for data privacy regulation. The final version of the GDPR was released in December 2015.

\n\n

Any surveillance camera or devices that are known to be fully GDPR compliant?\nAre there compliance/certification programs that have been derived out of the EUGDPR guideline?

\n\n

I am not able to understand how the EU plans to ensure/enforce GDPR compliance for the devices being sold.

\n", "Title": "Are there any surveillance cameras/devices that are EU General Data Protection Regulation (GDPR) compliant?", "Tags": "|security|surveillance-cameras|", "Answer": "

Adding to Simon's answer: if you look at it in more detail, the GDPR is more about processes than IT systems. Sure, there are some technical things in there (mostly encryption and pseudonymization), but for the bigger part it dictates what you are allowed to do with data at all. Keep in mind for everything following that I'm not a lawyer, just someone with some experience making his stuff GDPR-ready.

\n
\n

TL;DR: Devices itself cannot be compliant or non-compliant on themselves. Cameras can be tricky though. Regarding your certification question, the answer seems to be not yet.

\n
\n

Some general things about GDPR

\n

First of all it defines a specific opt-in mechanism for customer's data. This means as data processing legal entity you'll need to keep a record of the customer giving that consent and that consent has to specify which data you'll use and what purposes you are using it for. See Wiki sections 2.4 & 2.5.

\n

Secondly, it gives the user the express right to obtain all the data that the company has stored about him or her. That means all data, in every system, that can be tied to the specific user has to be provided. You can imagine that in bigger companies, where all sorts of systems hold data that is somehow connected to a user, this is quite a hassle. I guess Netflix has some fun with that if you look at their systems:

\n

\"Netflix\nSource (Slide 12)

\n

The following articles go on to give the right to rectify incorrect data and erase it completely\u2014of course only if no other law requires you to keep the data (e.g. tax or auditing laws for bills).

\n

Of course, there are tons of pitfalls in the other forty-something articles of the first fifty that define your responsibilities and consumer rights that I haven't even mentioned. Like not being able to put the data anywhere where the EU standards aren't met\u2014which if you read the thing seems to be everywhere, certainly not the US.

\n

Camera considerations

\n

Another interesting thing that might affect a device is article 32\u2014"security of processing"\u2014which is a bit fuzzy but if your camera is in front of a medical building it might be argued that you need end-to-end encryption with encryption of all data at rest on the camera too.

\n
\n

Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

\n

(a) the pseudonymisation and encryption of personal data;

\n

(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;

\n

(c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;

\n

(d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

\n
\n

Since, under EU law, the picture of a person is (thankfully) considered personal data, you are likely required to either pseudonymize or encrypt the pictures or video.

\n

On Certifications

\n

I haven't been able to find any that are based on the regulation itself. There's of course a bunch of people who sell you GDPR-sounding "certifications" but none (that I could find) seem to fulfill the requirements of the regulation as of today.

\n" }, { "Id": "2888", "CreationDate": "2018-04-30T21:57:54.890", "Body": "

I'm trying to measure the time elapsed between two events using the CCP modules on a PIC18F4520. The events are triggered by two sensors on the CCP1 and CCP2 pins. I've assigned a 1:8 prescaler to Timer1 via T1CON and I'm using a 16 MHz crystal, so one timer tick lasts 2e-6 s (Fosc/4 = 4 MHz, divided by 8). Since the maximum time I can measure before overflowing is just 0.13 seconds (65536 x 2e-6) and the event I'm measuring lasts around 0.32 seconds, I decided to count the number of times Timer1 overflows, multiply that count by 65536, and add it to the captured CCPR1H:CCPR1L value.

\n\n

However, I'm unable to count beyond one overflow of Timer1 (one setting of the TMR1IF flag).

\n\n

Hope someone knows why!

\n\n

Below are the associated functions

\n\n
void ccp_Init(void)\n{\n    TRISCbits.RC2 = 1;\n    TRISCbits.RC1 = 1;\n    CCP1CON = 0x05;\n    CCP2CON = 0x05;\n    T3CON = 0x00;\n    PIE1bits.CCP1IE=1;\n    PIE2bits.CCP2IE=1;\n    T1CON = 0x30;\n    TMR1H = 0;\n    TMR1L = 0;\n    PIR1bits.CCP1IF = 0;\n    PIR2bits.CCP2IF = 0;\n    PIR1bits.TMR1IF = 0;\n}\n\nint ccp_get(void)\n{\n    int count = 0;\n    TMR1L = 0;\n    TMR1H = 0;\n    while(PIR2bits.CCP2IF == 0);\n    T1CONbits.TMR1ON = 1;\n    while (PIR1bits.CCP1IF == 0);\n    {\n        if (PIR1bits.TMR1IF == 1)\n        {\n            PIR1bits.TMR1IF = 0;\n            TMR1L = 0;\n            TMR1H = 0;\n            count = count+1;\n            T1CONbits.TMR1ON = 1;\n        }\n    }\n    T1CONbits.TMR1ON = 0;\n    return count;\n}\n
\n\n

In the above code, int ccp_get(void) returns the number of times TMR1IF has been set, which never goes beyond 1.

\n", "Title": "Finding the time interval between two events using CCP on PIC18f4520", "Tags": "|microcontrollers|pic|", "Answer": "

This loop looks suspicious; try removing the semicolon on the first line:

\n\n
while (PIR1bits.CCP1IF == 0);\n{\n    if (PIR1bits.TMR1IF == 1)\n    {\n        PIR1bits.TMR1IF = 0;\n        TMR1L = 0;\n        TMR1H = 0;\n        count = count+1;\n        T1CONbits.TMR1ON = 1;\n    }\n}\n
\n\n
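Once the loop runs correctly, the capture arithmetic from the question can be sanity-checked like this (the function name and sample register values are illustrative, not from the PIC code):

```python
# 16 MHz crystal -> Fosc/4 = 4 MHz instruction clock; 1:8 prescale -> 2 us/tick
TICK_S = 8 / (16_000_000 / 4)

def elapsed_seconds(overflows, ccpr1h, ccpr1l):
    """Full 16-bit Timer1 rollovers plus the captured remainder, in seconds."""
    ticks = overflows * 65536 + ((ccpr1h << 8) | ccpr1l)
    return ticks * TICK_S

# One rollover is ~0.131 s, so a ~0.32 s event needs 2 overflows plus remainder
print(elapsed_seconds(2, 0x6D, 0x60))
```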

Overall, I suggest not to clear the timer counter. You could possibly miss some interrupts. Overflow happens after roll over, so the value would be already reset. See Datasheet on page 130 it says:

\n\n
\n

The TMR1 register pair (TMR1H:TMR1L) increments from 0000h to FFFFh\n and rolls over to 0000h. The Timer1 interrupt, if enabled, is\n generated on overflow, which is latched in interrupt flag bit, TMR1IF\n (PIR1<0>). This interrupt can be enabled or disabled by setting or\n clearing the Timer1 Interrupt Enable bit, TMR1IE (PIE1<0>).

\n
\n" }, { "Id": "2889", "CreationDate": "2018-05-01T12:05:27.010", "Body": "

I'm building a Raspberry Pi-based device for backyard gardeners that has a web page and access point for the initial configuration, including the Wi-Fi configuration. The connection uses WPA2 and the only two devices on that internal network would be the device itself and the user's phone/tablet/laptop. The access point is only visible during configuration which reduces the likelihood of outside attackers being able to guess the random, factory-shipped password. So I have encrypted traffic, almost certainly only two nodes, for a short time, and a random password. Thus there is no need for HTTPS that I can see, and I had planned to run HTTP.

\n\n

However, today I learned that starting in July Chrome will begin marking all HTTP sites as insecure.[1] But because the Wi-Fi configuration will be done by access point, no internet access is available yet to verify TLS certificates, which I understand is necessary for proper operation.[2] I could self-sign the cert, but that presents other problems.[3]

\n\n

So my options seem to be:

\n\n\n\n

How would you provide that lovely green lock by default for a device configuration page?

\n\n

[1] https://www.theverge.com/2018/2/8/16991254/chrome-not-secure-marked-http-encryption-ssl

\n\n

[2] https://security.stackexchange.com/questions/56389/ssl-certificate-framework-101-how-does-the-browser-actually-verify-the-validity?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa

\n\n

[3] https://www.globalsign.com/en/ssl-information-center/dangers-self-signed-certificates/

\n", "Title": "How would I use HTTPS on the web-based device config running an access point and no internet access?", "Tags": "|https|tls|", "Answer": "

One possible option is to use HTTPS and ship a real certificate on the device:

\n\n

Since you control the access point you presumably control the DHCP server on the access point, so you can have it provide a DNS server address at the same time.

\n\n

This DNS server can run on the AP and can resolve a fully qualified hostname to point to itself.

\n\n
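As a sketch of that DHCP+DNS setup, assuming the device runs dnsmasq (the interface name, address range and hostname below are made-up examples, not from the answer):

```
# /etc/dnsmasq.conf (illustrative)
interface=wlan0
dhcp-range=192.168.4.10,192.168.4.50,12h
# Hand out the AP itself as the DNS server...
dhcp-option=option:dns-server,192.168.4.1
# ...and resolve the product's fully qualified name to the AP
address=/setup.example-device.com/192.168.4.1
```

With something like this in place, a browser on the client can reach the configuration page by name while completely offline.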

You can then purchase a certificate for that fully qualified domain name and bundle this with the product to create a fully verified HTTPS connection.

\n\n

One big problem with this idea is that you are shipping the private key and certificate for that domain name, so you should assume it will be compromised at some point. You should therefore never put a real machine on the internet that uses that hostname (you may need to run a machine with this name for a very short time to actually obtain the certificate), as attackers would be able to spoof it easily.

\n\n

Also, the firmware for the AP would have a limited life, as the certificate will expire (probably after a year by default), after which you would get nasty certificate-expired warnings.

\n\n

Next Idea:

\n\n

Ditch Wi-Fi access point mode and use Bluetooth, e.g.:

\n\n

https://www.hardill.me.uk/wordpress/2016/09/13/provisioning-wifi-iot-devices/

\n\n

The downside is that Apple doesn't currently support Web Bluetooth, but Chrome on Windows/Linux/Mac does, and you could ship a native iOS app for Apple phone/tablet users.

\n" }, { "Id": "2892", "CreationDate": "2018-05-02T09:38:13.870", "Body": "

Reading some papers about IoT and Wireless Sensor Networks (WSN) I came across mote devices and have seen there are lots of them.

\n\n

I have understood that they are the leaves of an IoT system: embedded devices equipped with sensors and an optimized connectivity architecture. But it is not clear to me what exactly they are.

\n\n

For example, is the only difference between a mote and a microcontroller such as the Arduino Uno that a mote already comes equipped with sensors and connectivity?

\n\n

Plus I have seen many of them support their own operating system such as TinyOS but looking at its wiki page it is not updated since 2012: are mote devices still a good option nowadays? The fact that many of them are programmed in their own programming languages (such as NesC for TinyOS) and not in C/C++ does not lead to interface problems with other devices?

\n", "Title": "What is difference between a mote and a microcontroller equipped with sensors and connectivity?", "Tags": "|sensors|hardware|microcontrollers|wireless|", "Answer": "

I loved this question when I read it. \"It takes me back\", as the greybeards say :) TinyOS \"went public\" in 2000 - about a year after the phrase \"Internet of Things\" was coined, according to Wikipedia. A long time ago, in a galaxy far, far away... OK, down to business:

\n\n

I believe the answer to your question as to whether or not motes, TinyOS, NesC, etc are \"good options nowadays\", is an unqualified \"Yes\". I'll explain why.

\n\n

I learned of TinyOS in 2003; it was already a fairly mature system by then, and being used in some interesting applications. \"Motes\" is a term for the hardware, as in \"remote sensor\". Each mote had a processor, a battery, a radio (not WiFi) and some sort of a sensor. The first three components were common across a variety of motes, whereas the sensor was generally peculiar to the application; light, heat, magnetic fields, etc. If you're interested in details, numerous papers (mostly academic and wordy) have been published that document the design of TinyOS... here's one I like.

\n\n

As a system, TinyOS and the mote were designed to accomplish an objective with extremely scant resources. For example:

\n\n\n\n

Delivery of sensor data to its ultimate destination from broadly-dispersed motes that might be dropped from an aircraft, free-fall style, into an extremely hostile operating environment demanded clever routing algorithms. \"Flexibility\" was thus the key driver in design of TinyOS' communication stack. Consequently, no existing communication infrastructure is needed. This is of course both empowering and challenging. A number of routing protocols were developed, and the open-source licensing encouraged adoption and modification of these protocols.

\n\n

As far as TinyOS being abandoned, or stagnant, I don't feel that's the case. The TinyOS GitHub repo shows recent activity, and suggests that it's being maintained and cared after. That said, TinyOS was never going to attract the \"electronics-and-software-as-a-hobby\" crowd; a crowd that didn't really exist until recently when Arduino and Raspberry Pi became popular.

\n\n

And that brings me to the point in this elaborate \"answer\" to your thought-provoking question. I don't think there is a cut-and-dried, matter-of-fact answer. I think the answer comes down to this: We humans are more like sheep or lemmings than we like to believe. Raspberry Pi, Arduino, etc. are products that have attracted large followings of the curious and revenue for those who traffic in gadgets, but that has little or nothing to do with their suitability for a particular application. I'm not suggesting that one re-invent the wheel for each new problem, but at the same time, one (or two) size(s) do not fit all. Use the right tool for the job.

\n\n

I know from your question that you understand this, but perhaps haven't thought of it in this way. Frankly, neither had I until your question jarred some loose rocks. So yes, I think you can still build some very elegant things with TinyOS, but you may have to do it with fewer support resources. Or, maybe there will be a \"TinyOS Stack Exchange\" in the future? Ha ha - don't hold your breath :)

\n\n

I'll close with this: \u201cThe truth is often what we make of it; you heard what you wanted to hear, believed what you wanted to believe.\u201d

\n\n

Addendum:

\n\n

As you think about how to build your devices, and aggregate them into systems, Phil Levis offers some food for thought in this brief video.

\n\n

And as far as resources to support TinyOS development, here are a few that I found while researching my \"answer\" here:

\n\n\n" }, { "Id": "2897", "CreationDate": "2018-05-04T05:31:19.743", "Body": "

I am currently trying to write a few bytes from the PN532 to NTAG213's user-programmable memory and read back those bytes as well. The write and read happens when the PN532 detects the NTAG213 within proximity. I have ported the necessary functions from the Adafruits library here for the ATxmega32A4U. To scan and read an NTAG213 card, I am using the below functions.

\n\n
// Reads cards, beep and long light flash for good card read, short light flash for no card read, lights mostly on for NFC chip not talking right\nvoid TestNFC_ReadTag (void) {\n    bool success;\n    uint8_t uid[] = { 0, 0, 0, 0, 0, 0, 0 };  // Buffer to store the returned UID\n    uint8_t uidLength;                        // Length of the UID (4 or 7 bytes depending on ISO14443A card type)\n    \n    /************************************ Everytime Try Reading a Card ***********************************/\n    // Check for a good card read\n    success = PN532_readPassiveTargetID(PN532_MIFARE_ISO14443A, &uid[0], &uidLength, 1000);\n    // If card read is good then beep and long flash of main lights\n    if (success) {\n        LIGHT_ON();\n        EventTimerCreate(200,1,TestingLightsOff);\n        for (int nfcBeep = 0; nfcBeep < 300; nfcBeep++) {\n            DACB.CH0DATA = 0x700;\n            _delay_us(200);\n            DACB.CH0DATA = 0x900;\n            _delay_us(200);\n        }\n        DACB.CH0DATA = 0x800;\n        _delay_ms(500);         // Reduced from 1000ms to 500ms to avoid WDT reset - Vinay\n    }\n    // If valid card read then short light flash no beep\n    else {\n        LIGHT_ON();\n        _delay_ms(10);\n        LIGHT_OFF();\n    }\n}\n\n/**************************************************************************/\n/*!\nWaits for an ISO14443A target to enter the field\n\n@param  cardBaudRate  Baud rate of the card\n@param  uid           Pointer to the array that will be populated\nwith the card's UID (up to 7 bytes)\n@param  uidLength     Pointer to the variable that will hold the\nlength of the card's UID.\n\n@returns 1 if everything executed properly, 0 for an error\n*/\n/**************************************************************************/\nbool PN532_readPassiveTargetID(uint8_t cardbaudrate, uint8_t *uid, uint8_t *uidLength, uint16_t timeout)\n{\n    // read data packet\n    if (PN532_readResponse(pn532_packetbuffer, 
sizeof(pn532_packetbuffer), timeout) < 0) {\n        return 0x0;\n    }\n    \n    // check some basic stuff\n    /* ISO14443A card response should be in the following format:\n\n    byte            Description\n    -------------   ------------------------------------------\n    b0              Tags Found\n    b1              Tag Number (only one used in this example)\n    b2..3           SENS_RES\n    b4              SEL_RES\n    b5              NFCID Length\n    b6..NFCIDLen    NFCID\n    */\n\n    if (pn532_packetbuffer[0] != 1)\n    return 0;\n\n    uint16_t sens_res = pn532_packetbuffer[2];\n    sens_res <<= 8;\n    sens_res |= pn532_packetbuffer[3];\n\n    DMSG("ATQA: 0x");  DMSG_HEX(sens_res);\n    DMSG("SAK: 0x");  DMSG_HEX(pn532_packetbuffer[4]);\n    DMSG("\\r\\n");\n\n    /* Card appears to be Mifare Classic */\n    *uidLength = pn532_packetbuffer[5];\n\n    for (uint8_t i = 0; i < pn532_packetbuffer[5]; i++) {\n        uid[i] = pn532_packetbuffer[6 + i];\n    }\n\n    return 1;\n}\n/**************************************************************************/\n/*!\n@brief  Starts the scan\n\n@returns 1 if everything executed properly, 0 for an error\n*/\n/**************************************************************************/\nuint8_t PN532_StartScan(void)\n{\n    uint8_t pn532_packetbuffer[2];\n    uint8_t ret = 0;\n    \n    pn532_packetbuffer[0] = PN532_COMMAND_INLISTPASSIVETARGET;\n    pn532_packetbuffer[1] = 1;\n    pn532_packetbuffer[2] = PN532_MIFARE_ISO14443A;\n    \n    /* Disable the interrupt */\n    PORTA.INT1MASK &= ~(1<<PIN7_bp);\n    \n    /* Send scan command and read acknowledgments */\n    if (PN532_writeCommand(pn532_packetbuffer, 3, NULL, 0))\n        ret = 0x0;  // command failed\n    else\n        ret = 0x01;\n    \n    /* Enable the interrupt */\n    PORTA.INT1MASK |= PIN7_bm;\n    \n    return(ret);\n}\n
\n

To write to and read from the NTAG213's memory, I use the functions below.

\n
/**************************************************************************/\n/*!\n    Tries to write an entire 4-bytes data buffer at the specified page\n    address.\n\n    @param  page     The page number to write into.  (0x04..0x27).\n    @param  buffer   The byte array that contains the data to write.\n\n    @returns 1 if everything executed properly, 0 for an error\n*/\n/**************************************************************************/\nuint8_t PN532_writePage (uint8_t page, uint8_t *buffer)\n{\n     if ((page < NTAG_PAGE_START_ADDRESS) || (page > NTAG_PAGE_END_ADDRESS)) {\n         DMSG("Page value out of range\\n");\n         return 0;\n     }\n    \n    /* Prepare the first command */\n    pn532_packetbuffer[0] = PN532_COMMAND_INDATAEXCHANGE;               /* Card number */\n    pn532_packetbuffer[1] = 1;                                          \n    pn532_packetbuffer[2] = NTAG_CMD_WRITE_USER_EEPROM;                 /* NTAG213 Write cmd = 0xA2 */\n    pn532_packetbuffer[3] = page;                                       /* page Number (0x04..0x27) */\n    memcpy (pn532_packetbuffer + 4, buffer, 4);                         /* Data Payload */\n    \n    /* Send command to write to NTAG 213 EEPROM */\n    if (PN532_writeCommand(pn532_packetbuffer, 8, NULL, 0))\n        return 0;\n\n    /* Read the response packet */\n    if(PN532_readResponse(pn532_packetbuffer, sizeof(pn532_packetbuffer), 100) < 0)\n        return 0;\n\n    return 1;\n}\n\n/**************************************************************************/\n/*!\n    Tries to read an entire 4-bytes page at the specified address.\n\n    @param  page        The page number ((0x04..0x27 in most cases)\n    @param  buffer      Pointer to the byte array that will hold the\n                        retrieved data (if any)\n*/\n/**************************************************************************/\nuint8_t PN532_readPage (uint8_t page, uint8_t *buffer)\n{\n    if ((page < 
NTAG_PAGE_START_ADDRESS) || (page > NTAG_PAGE_END_ADDRESS)) {\n        DMSG("Page value out of range\\n");\n        return 0;\n    }\n    \n    /* Prepare the command */\n    pn532_packetbuffer[0] = PN532_COMMAND_INDATAEXCHANGE;\n    pn532_packetbuffer[1] = 1;                                  /* Card number */\n    pn532_packetbuffer[2] = NTAG_CMD_READ_USER_EEPROM;          /* NTAG Read command = 0x30 */\n    pn532_packetbuffer[3] = page;                               /* page Number (0x04..0x27) */\n\n    /* Send the command */\n    if (PN532_writeCommand(pn532_packetbuffer, 4, NULL, 0))\n        return 0;\n\n     /* Read the response packet */\n     //if(PN532_readResponse(pn532_packetbuffer, sizeof(pn532_packetbuffer), 100))\n     if(PN532_readResponse(pn532_packetbuffer, 20, 100) < 0)\n        return 0;\n        \n    #ifdef PN532_PGREAD_DBG\n        printf("0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x 0x%02x\\r\\n",\n        pn532_packetbuffer[0], pn532_packetbuffer[1], pn532_packetbuffer[2], pn532_packetbuffer[3],\n        pn532_packetbuffer[4], pn532_packetbuffer[5], pn532_packetbuffer[6], pn532_packetbuffer[7],\n        pn532_packetbuffer[8], pn532_packetbuffer[9], pn532_packetbuffer[10], pn532_packetbuffer[11],\n        pn532_packetbuffer[12], pn532_packetbuffer[13], pn532_packetbuffer[14], pn532_packetbuffer[15],\n        pn532_packetbuffer[16], pn532_packetbuffer[17], pn532_packetbuffer[18]);\n    #endif\n\n    /* If the status isn't 0x00 we probably have an error */\n    if (pn532_packetbuffer[0] == 0x00) {\n        /* Copy the 4 data bytes to the output buffer         */\n        /* Block content starts at byte 1 of a valid response */\n        /* Note that the command actually reads 16 bytes or 4  */\n        /* pages at a time ... 
we simply discard the last 12  */\n        /* bytes                                              */\n        memcpy (buffer, pn532_packetbuffer + 1, 4);\n    } \n    else {\n        return 0;\n    }\n    \n    // Return OK signal\n    return 1;\n}\n
\n

I was performing some tests and came across some issues when I tried to read from the NTAG.

\n

The test code below works, meaning the PN532 is able to read the card as well as write and read back the user data from the NTAG.

\n
 void TestNFCInterrupt_ReadTag (void){\n    static uint8_t flag  = 0;\n    \n    /* Test code. Always perform read or write prior to getting the uuid */\n    if(flag == 0){\n        testPN532WritePage();\n        flag = 1;\n    }\n    else{\n        testPN532ReadPage();\n        flag = 0;\n    }\n    \n    /* Needs to restart the scan to detect the NTAG passively */\n    PN532_StartScan();\n    TestNFC_ReadTag();\n    PN532_StartScan();\n    \n}\n
\n

But surprisingly, for the below code, the card is detected, the write works, but when next the read happens after the card is detected, I get a status error response as 0x27 which indicates that the command is not acceptable due to the current context of the PN532, this can be seen on page 68 of the PN532 user manual.

\n
void TestNFCInterrupt_ReadTag (void){\n    static uint8_t flag  = 0;\n    \n    TestNFC_ReadTag();\n\n    /* Test code. Always perform read or write prior to getting the uuid */\n    if(flag == 0){\n        testPN532WritePage();\n        flag = 1;\n    }\n    else{\n        testPN532ReadPage();\n        flag = 0;\n    }\n    \n    /* Needs to restart the scan to detect the NTAG passively */\n    PN532_StartScan();\n}\n
\n

Also, here is the NTAG213 manual. Apologies for the exhaustive post, but taught the above information is necessary to seek help. I'd appreciate if anyone could take some time to check the code or suggest anything.

\n", "Title": "Unable to read the memory from NTAG213 via PN532", "Tags": "|nfc|", "Answer": "

After a while of debugging and going through the code, I found that the problem was caused by the audio beeps on detecting/reading a new card within the void TestNFC_ReadTag (void) function, which introduced a delay between the time the UID was read and the subsequent call to the read/write page API. To write to or read from the NTAG's user memory, it is necessary to issue the INDATAEXCHANGE command and invoke the read/write page immediately after the tag is read. So I made some slight changes to the void TestNFC_ReadTag (void) function to meet this requirement, and it works. Below are the changes I made:

\n\n
/* Made these as static global variables */\nuint8_t uid[] = { 0, 0, 0, 0, 0, 0, 0 };  // Buffer to store the returned UID\nuint8_t uidLength;                        // Length of the UID (4 or 7 bytes depending on ISO14443A card type)\n\n\n\n// Reads cards, beep and long light flash for good card read, short light flash for no card read, lights mostly on for NFC chip not talking right\nvoid TestNFC_ReadTag (void) {\n    bool success;\n    static flag = 0;\n\n    /************************************ Everytime Try Reading a Card ***********************************/\n    // Check for a good card read\n    success = PN532_readPassiveTargetID(PN532_MIFARE_ISO14443A, &uid[0], &uidLength, 1000);\n    \n    if (success) {\n\n        if(flag == 0){\n           testPN532WritePage();\n           flag = 1;\n        }\n        else{\n           testPN532ReadPage();\n           flag = 0;\n     }\n\n        LIGHT_ON();\n        EventTimerCreate(200,1,TestingLightsOff);\n        for (int nfcBeep = 0; nfcBeep < 300; nfcBeep++) {\n            DACB.CH0DATA = 0x700;\n            _delay_us(200);\n            DACB.CH0DATA = 0x900;\n            _delay_us(200);\n        }\n        DACB.CH0DATA = 0x800;\n        _delay_ms(500);         // Reduced from 1000ms to 500ms to avoid WDT reset - Vinay\n    }\n    // If Invalid card read then short light flash no beep\n    else {\n        LIGHT_ON();\n        _delay_ms(10);\n        LIGHT_OFF();\n    }\n}\n\nvoid TestNFCInterrupt_ReadTag (void){\n\n    /* Read the tag and write/read to NTAG213 user memory */\n    TestNFC_ReadTag();\n\n    /* Needs to restart the scan to detect the NTAG passively */\n    PN532_StartScan();\n}\n
\n

I hope this helps!

\n" }, { "Id": "2906", "CreationDate": "2018-05-07T09:33:21.350", "Body": "

The Zigbee hardware looks quite simple to produce,[1] so why are the modules so expensive?

\n\n

For comparison, I can get an 855 MHz wireless module for 2-3\u20ac, but I can't find Zigbee modules under 15\u20ac.

\n\n

[1] (image: XBee module)

\n", "Title": "Why are Zigbee hardware modules so expensive?", "Tags": "|hardware|wireless|zigbee|", "Answer": "

I know it's a pretty old thread, but still: have you heard about

\n\n

http://zboss.dsr-wireless.com/

\n\n

or

\n\n

http://www.ti.com/tool/Z-STACK

\n\n

So it is possible to get the stack for free and to get cheap hardware, e.g. the CC2531.\nThe only problem I can see is that it is not that easy to compile (an IAR license is necessary), and it is even more complicated to grasp the whole Zigbee specification and concept well enough to implement your own device.

\n" }, { "Id": "2924", "CreationDate": "2018-05-10T02:03:37.653", "Body": "

I have been developing a Wi-Fi remote access IoT device for an electrical control to display at an upcoming engineering trade show. Unfortunately, I was unable to attend this trade show due to health reasons.

\n\n

The problem is that my fellow employees were unable to successfully connect to the IoT device in soft access point mode. They reported seeing it in their Wi-Fi devices and being able to connect to it, but then having the Wi-Fi connection drop out.

\n\n

Being an engineering trade show, I am guessing that there would be lots of IoT devices, and sometimes in large crowds I have experienced my mobile phone network going down. Could other IoT devices be fighting for the Wi-Fi channels and hijacking our connection? I\u2019m confused, as I never experienced this in the workplace environment while developing the firmware.

\n", "Title": "IoT device's soft access point module dropping out at trade show", "Tags": "|wifi|", "Answer": "

Having been to many a trade show, there are a few things to set when on the expo floor:

\n\n
    \n
  1. Hide the AP name. Most devices won't try to connect if they can't see the AP name.
  2. \n
  3. Look for a channel that is least used and set the AP to use it.
  4. \n
  5. Connect all you can via wired Ethernet to minimize the number of WiFi nodes trying to connect to the AP. We had the same issues you had at an Intel show, until we hard-wired the MQTT Broker to the AP...then things worked great.
  6. \n
  7. Turn down the transmit power...if all they are connecting to is in your booth, then cut the transmit power by half, if you have an AP that can do that. I always bring along a DD-WRT AP node that has the ability to inspect channels and modify transmit power.
  8. \n
\n" }, { "Id": "2927", "CreationDate": "2018-05-10T08:33:18.910", "Body": "

I'm here asking for some support as a newbie in embedded communication.

\n\n

My situation is the following:

\n\n\n\n

My goal: create a connection between the embedded system and a PC via WiFi, having both PC and device on the same LAN, in order to allow the PC to send files to the device, and send feedbacks from the device back to the PC.

\n\n

The problem is that I basically do not know how to approach the problem, because this is my first project that relies on Wi-Fi connectivity. \nThe main thing I think I should do is define a custom communication protocol, in order to let the two actors talk to each other. \nThe flow, in my opinion, should be:

\n\n
    \n
  1. set up a stable connection (UDP or TCP, I also have to determine this) between the two actors
  2. \n
  3. on the PC side, create an application that sends the data to the device
  4. \n
  5. device side, write a handler for the incoming packets and manage their arrival
  6. \n
  7. device side, send feedback to the PC once the data have arrived safely (check correctness via a checksum).
  8. \n
\n\n
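Those four steps map quite directly onto a plain TCP socket exchange. A minimal local sketch (the port, loopback addresses and the CRC32-as-feedback choice are illustrative assumptions, not a prescribed protocol):

```python
import socket
import threading
import zlib

def device_loop(srv: socket.socket) -> None:
    """Device side: receive one file per connection, reply with its CRC32."""
    conn, _ = srv.accept()
    data = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:                      # sender closed its write side: file complete
            break
        data += chunk
    conn.sendall(zlib.crc32(data).to_bytes(4, "big"))  # feedback to the PC
    conn.close()

def send_file(host: str, port: int, payload: bytes) -> bool:
    """PC side: send the bytes, then check the device's checksum feedback."""
    s = socket.socket()
    s.connect((host, port))
    s.sendall(payload)
    s.shutdown(socket.SHUT_WR)             # signal end-of-file, keep read side open
    ack = s.recv(4)
    s.close()
    return int.from_bytes(ack, "big") == zlib.crc32(payload)

# Local demonstration of the round trip:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))                 # any free port
srv.listen(1)
threading.Thread(target=device_loop, args=(srv,)).start()
print(send_file("127.0.0.1", srv.getsockname()[1], b"some firmware bytes"))  # True
```

The same shape carries over to the microcontroller side in C: a receive loop until the peer closes its write half, then one small feedback write on the same connection.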

Do you guys think that the above could be a good starting point? \nDo you have any documentation online to share with me that could clarify my doubts or give me some starting point to follow?

\n", "Title": "WiFi communication between PC and embedded system on LAN", "Tags": "|wifi|microcontrollers|protocols|", "Answer": "

OK guys, thank you for the comments. I've managed to solve the issue by using MQTT for the feedback data, as mentioned before by @Sean. Problem solved!

\n" }, { "Id": "2935", "CreationDate": "2018-05-11T08:35:34.630", "Body": "

I would like to measure the air quality related to road traffic in a city for an IoT application.

\n\n

The European Environmental Agency has defined quality indicators (good, bad, etc) according to the concentration of some gases in the air such as CO, NOx etc. I want to measure the concentration in the air of these gases.

\n\n

I have come across the Waspmote board by Libelium, which also produces some sensor boards. However, the gas sensors, particularly the calibrated ones, are really expensive.\nDoes anyone know a more affordable solution or integrated product to measure air quality?

\n", "Title": "What are affordable solutions to air quality monitoring system?", "Tags": "|hardware|sensors|", "Answer": "

I think it's non-existent today.

\n\n

I work at a Dutch municipality and, though things might have changed now, in December 2017 our environmental department concluded that even semi-professional gas or dust sensors were just not good enough to determine the low levels needed for outdoor measurements. (Including everything Libelium offered at that time for CO, NOx and particulate matter.)

\n\n

It's not my cup of tea, but the reasoning of our environmental department:

\n\n\n\n

Of course, combining a lot of low-cost measurements and comparing those to the results of (very expensive) professional measurements might still yield useful indicators.

\n\n
\n\n

\u2020 \u00b5g/m\u00b3 = ppb \u00d7 12.187 \u00d7 M / T, where \u00b5g/m\u00b3 is micrograms of gaseous pollutant per cubic meter of ambient air, ppb is parts per billion by volume (i.e., volume of gaseous pollutant per 10\u2079 volumes of ambient air), M is the molecular weight of the gaseous pollutant, and T is the temperature in kelvin. An atmospheric pressure of 1 atmosphere is assumed.

\n\n

So, for a temperature of 25 \u00b0C: 1 ppb SO2 = 2.62 \u03bcg/m\u00b3; NO2 = 1.88 \u03bcg/m\u00b3; NO = 1.25 \u03bcg/m\u00b3; O3 = 2.00 \u03bcg/m\u00b3; CO = 1.145 \u03bcg/m\u00b3; Benzene = 3.19 \u03bcg/m\u00b3.

\n\n
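The conversion above can be checked numerically. A quick sketch (the molecular weights are standard values, not from the source):

```python
def ppb_to_ug_m3(ppb: float, mol_weight: float, temp_k: float = 298.15) -> float:
    """ug/m3 = ppb * 12.187 * M / T (at 1 atm), per the formula above."""
    return ppb * 12.187 * mol_weight / temp_k

# Reproduce two of the 25 degC table entries: SO2 (M = 64.07), NO2 (M = 46.01)
print(round(ppb_to_ug_m3(1, 64.07), 2))  # 2.62
print(round(ppb_to_ug_m3(1, 46.01), 2))  # 1.88
```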

Source: DCE - Danish Centre For Environment And Energy.

\n" }, { "Id": "2939", "CreationDate": "2018-05-11T22:56:00.423", "Body": "

I use the following example of an ABP-activated LoRaWAN node:

\n

https://github.com/matthijskooijman/arduino-lmic/blob/master/examples/ttn-otaa/ttn-otaa.ino

\n

But for some reason I cannot figure out when the node transmits some data or not. As far I see from this snippet:

\n
LMIC_setTxData2(1, mydata, sizeof(mydata)-1, 0);\nSerial.println(F("Packet queued"));\n
\n

The packets are inserted into a queue and then transmitted when possible. So how will I make the ABP example give me an indication of when the Dragino shield/HopeRF RFM9x starts transmitting the physical packet that is in the queue?

\n

I am asking because, for some reason, the single-channel gateway pings TTN (it shows up via the Semtech UDP stat message) but barely receives any messages to forward. So I want to debug it; therefore I need a more verbose log of when the physical transmission of a queued packet started and ended. I occasionally get some messages when I power the LoRaWAN node off and on.

\n

Also, I may not get any messages at all on my LoRaWAN single-channel gateway, even if I run the setup for over 30 minutes with the Arduino being the only node on my application and sitting right next to my packet forwarder.

\n

Over my gateway I used the instructions in: http://www.instructables.com/id/Use-Lora-Shield-and-RPi-to-Build-a-LoRaWAN-Gateway/

\n

For my packet forwarder I used the following implementation: https://github.com/tftelkamp/single_chan_pkt_fwd

\n

Also I have configured my TTN application to use the legacy Semtech packet forwarder.

\n", "Title": "Arduino LMIC: Figuring out when packet transmission has been started on Chip", "Tags": "|lora|lorawan|", "Answer": "

LMiC does not tell your code when transmission has started, but you could enable debug logging.

\n\n

When you're transmitting well below duty cycle limitations, LMiC will often send almost immediately. It might postpone things:

\n\n\n\n

When a transmission and waiting for RX1 and RX2 (if applicable) have completed, you'll get the EV_TXCOMPLETE event. (But even then, it might still delay a next transmission for 3 seconds; see above.)

\n\n

Peeking at some old code I once used, it seems you could guess if it's sending right away by using:

\n\n
LMIC_setTxData2(1, mydata, sizeof(mydata)-1, 0);\n// At this point LMIC.txend is still the end time of the last packet?\nif(LMIC.txend < os_getTime()) {\n  Serial.print(F(\"Will send right away\"));\n}\nelse {\n  Serial.print(F(\"Packet queued\"));\n}\n
\n\n

(But: not tested right now.)

\n\n

That said, to debug your actual problems:

\n\n\n\n

And above all:

\n\n
\n

I use the following example of an abp-activated LoRaWAN node:

\n \n

https://github.com/matthijskooijman/arduino-lmic/blob/master/examples/ttn-otaa/ttn-otaa.ino

\n
\n\n

That example is for OTAA, not for ABP. (And the single-channel test gateway you're using is deprecated and does not support downlinks, hence no OTAA either.)

\n" }, { "Id": "2948", "CreationDate": "2018-05-15T14:12:13.737", "Body": "

I'm looking at making a remote control for a custom IoT device using Bluetooth low energy (BLE) and I want the battery in the remote to last a very long time (primary battery, non rechargeable) so I don't want the remote to draw power (at all) except when it's used.

\n\n

My issue is that as a remote control Bluetooth usually takes a few seconds to pair once activated. Is there a way to mitigate that few seconds of lag? Everyone hates lag in a remote.

\n", "Title": "Rapid pairing of Bluetooth low energy interface", "Tags": "|bluetooth-low-energy|", "Answer": "

If two devices are already bonded and one of them is mains-powered, you should be able to establish the connection in well under 1 second, given that the mains-powered device is constantly scanning or advertising and you're using directed advertisements, white listing, and low connection intervals in the first second.

\n\n

Another option is to keep the connection alive at all times while maintaining low latency. The maximum recommended supervision timeout is 15 seconds; this means the Central must receive a packet from the Peripheral at least every 15 seconds or else it will drop the connection. On the other hand, you don't want a connection interval of 15 seconds, and that is where Slave Latency comes in: it lets the Peripheral ignore a number of connection intervals without losing the connection to its Central. So with a connection interval of 500 ms and a slave latency of 30 you will be asleep (15000-1)/15000 = >99.99% of the time and have 500 ms latency when you do want to communicate.

\n\n
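The sleep fraction above can be sanity-checked. A sketch assuming roughly 1 ms of radio-on time per connection event (the "1" in the fraction; that figure is an assumption, not a measurement):

```python
conn_interval_ms = 500
slave_latency = 30
radio_event_ms = 1  # assumed radio-on time per connection event

wake_period_ms = conn_interval_ms * slave_latency   # 15000 ms, as in the answer
asleep_fraction = (wake_period_ms - radio_event_ms) / wake_period_ms
print(f"asleep {asleep_fraction:.4%} of the time")  # asleep 99.9933% of the time
```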

I estimate <2 \u00b5A average current to keep the connection alive. You will have to compare it with the average current consumption of a fast re-connection approach, but I doubt that keeping the connection alive consumes more energy on average than re-connecting on demand.

\n" }, { "Id": "2949", "CreationDate": "2018-05-15T17:57:42.627", "Body": "

Is there any possible way to remote control my Samsung Smart TV (which has an infrared remote) via my Philips Hue system, which communicates over Zigbee?

\n\n

It seems like ORVIBO produces such devices, but I can't find any further information, nor a retailer in Germany. I would like to tell Alexa/Siri to \"Switch on my TV\", which then triggers another device to send the \"switch on\" signal via infrared. A Zigbee plug is not an option for me, as it would cut the TV's power abruptly.

\n\n

Using Hue would be optimal as I use it for everything, but any idea of achieving this in any way would be nice.

\n", "Title": "Control Samsung Smart TV (Infrared Remote) via Hue / Zigbee?", "Tags": "|zigbee|samsung-smartthings|philips-hue|infrared|", "Answer": "

The Logitech Harmony Hub is a smart hub that can control devices via infrared (like your TV remote), and is compatible with Alexa. It doesn't work via ZigBee, though \u2014 it connects via Wi-Fi to your Echo instead.

\n\n

You can see if it's compatible with your device on the compatibility page; as of writing, Logitech claim that the hub is compatible with over 270,000 devices, so I suspect your TV should be supported.

\n\n

Alternatively, you could consider one of various other infrared skills and hubs compatible with Alexa, such as this skill, which can use a phone with an IR blaster to send the necessary signals. I think you're less likely to find something specifically designed to work with the Hue's network for this purpose, and Wi-Fi based devices are a little more common. The Harmony is probably the most well-known and compatible option at the moment.

\n" }, { "Id": "2951", "CreationDate": "2018-05-16T05:37:20.040", "Body": "

I would like to host a web page from a RPi that has websocket controls that update in 'real time' such as a slider that transmits its value as you move it. I then want to broadcast the values to several ESP8266 modules (~10) running Arduino via Wi-Fi. I would like to have a payload data rate of ~10 bytes/packet x 30Hz = 300 Bytes/s.

\n\n

What type of connection should I use for the Pi to the ESP8266s? I think MQTT is too slow for this?

\n", "Title": "Network type for streaming from RPi to several ESP8266 modules", "Tags": "|mqtt|raspberry-pi|esp8266|streaming|web-sockets|", "Answer": "

MQTT should be more than fast enough for your architecture, given a decent WiFi network. I run about 30 sensors (ESP8266, Feather M0, Arduino Uno, etc.) all using MQTT back to a Mosquitto Broker running as a Docker container on a 15-year-old laptop, which connects back out to my control software and displays, and it all works just fine. I'm pushing close to 2 million MQTT packets a day at my house...so your 864k seems doable to me :)

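To see why 300 bytes/s of payload is easy for MQTT, you can estimate the on-wire size of a QoS 0 PUBLISH. A sketch with a hypothetical topic name (the 2-byte fixed header assumes a remaining length under 128 bytes; TCP/IP framing adds more on the wire):

```python
def qos0_publish_size(topic: str, payload_len: int) -> int:
    """Bytes in a QoS 0 MQTT PUBLISH: fixed header (2) + topic length field (2)
    + topic + payload. No packet identifier at QoS 0."""
    remaining = 2 + len(topic.encode()) + payload_len
    assert remaining < 128  # one-byte remaining-length encoding
    return 2 + remaining

# A 10-byte slider value on a hypothetical topic, 30 times per second:
pkt = qos0_publish_size("esp/slider", 10)     # 24 bytes per message
print(pkt * 30, "bytes/s at the MQTT layer")  # 720 bytes/s
```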
\n" }, { "Id": "2958", "CreationDate": "2018-05-17T13:38:02.313", "Body": "

I have an MQTT broker installed on an RPi.\nI want to send data measured by a sensor attached to an Arduino to the broker on the RPi.\nFor this I am using a SIM808.\nI know that MQTT works over a LAN; how can I set up a LAN between the RPi and the SIM808?

\n", "Title": "MQTT protocol between RPi and sim808", "Tags": "|mqtt|", "Answer": "

The Sim808 is a cellular GPRS data device (https://www.adafruit.com/product/2637), so getting data from it to a Raspberry Pi will depend on limits your wireless carrier has on the service that you have to sign up for. The data will have to travel from the Sim808, to the wireless carrier's data service that the Sim808 is connected to, out to the Internet, then to a publicly addressable node on your network (should be a Firewall) that is connected to the Internet, and then over to your Raspberry Pi MQTT broker. This could be a very expensive way to get data to your MQTT broker, especially if both devices are relatively close. Have you thought about using WiFi? If your sensor is going to be sending data at a high rate, you could easily go over your wireless service's data plan, and end up with a very large bill at the end of the month :(

\n" }, { "Id": "2977", "CreationDate": "2018-05-23T05:55:57.890", "Body": "

I'm looking for a doorbell with a camera. I would prefer something like a 180\u00b0 fisheye with 4K, but I guess there is nothing like that on the market, so 160\u00b0 with Full HD should work too. Should I mention that I want night vision too? ;)

\n\n

I checked NEST Hello, the ring.com doorbells and the August Doorbell. They are almost perfect, but they all share the drawback that the video signal cannot be stored locally, only in a cloud with expensive subscriptions.

\n\n

Is there something which I may have missed?

\n", "Title": "Door bell with Wi-Fi camera", "Tags": "|smart-home|surveillance-cameras|door-sensor|", "Answer": "

There are a few models available on the market with 180-degree capability, including:

\n\n

Sky Bell HD

\n\n

Ring Video Doorbell

\n\n

Vivint Doorbell Camera

\n\n

But none of these have 4K video quality. Please refer to the site below for more details:

\n\n

http://mobilesiri.com/best-wireless-doorbell-camera/

\n" }, { "Id": "2986", "CreationDate": "2018-05-23T22:13:54.690", "Body": "

I am trying to connect an IoT node to an IBM IoT server via MQTT/GSM. I am using a SIM800L GSM module, and a serial USB device to send the commands to the module from my computer. Later on, I will use the commands on an Arduino to register my device.\nI'm sending the following command to the GSM module:

\n\n
AT+CIPSTART=\"TCP\",\"[my-org-id].messaging.internetofthings.ibmcloud.com\",\"my-port\"\n
\n\n

And I get the following output:

\n\n
CONNECT OK\n
\n\n

But to connect my device to Bluemix, I need to submit a username and password. There isn't any documentation that I can find on doing that, so I tried this:

\n\n
AT+CIPSEND\n>usernamepassword\n\nSEND OK\n
\n\n

And a few seconds after sending that:

\n\n
CONNECTION CLOSED\n
\n\n

So I think this is not a valid way of submitting the user and password. But I couldn't find any documentation about authentication with MQTT over GSM.

\n", "Title": "Connecting a SIM800L to a IBM BlueMix Server via MQTT", "Tags": "|mqtt|gsm|authentication|", "Answer": "

I've found the library below useful for solving authentication issues. If you encounter the same issues on GSM modules with Arduino, it may help you as well:

\n\n

https://github.com/elementzonline/SIM800_MQTT

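The reason sending raw `usernamepassword` text fails is that, once the TCP socket is open, the first thing the broker expects is a binary MQTT CONNECT packet, with the client ID, username and password as length-prefixed fields. A sketch of what that packet looks like for MQTT 3.1.1 (the client ID, username and password here are placeholders, not IBM-specific values):

```python
def mqtt_connect_packet(client_id: str, username: str, password: str,
                        keepalive: int = 60) -> bytes:
    """Build a minimal MQTT 3.1.1 CONNECT packet (clean session, no will)."""
    def field(b: bytes) -> bytes:
        return len(b).to_bytes(2, "big") + b  # 2-byte length prefix

    # Variable header: protocol name "MQTT", level 4, connect flags
    # (username + password + clean session = 0b11000010), keepalive.
    var_header = field(b"MQTT") + bytes([4, 0b11000010]) + keepalive.to_bytes(2, "big")
    payload = field(client_id.encode()) + field(username.encode()) + field(password.encode())
    body = var_header + payload
    assert len(body) < 128  # keep the remaining-length encoding to one byte
    return bytes([0x10, len(body)]) + body  # 0x10 = CONNECT packet type

pkt = mqtt_connect_packet("d:myorg:mytype:mydev", "use-token-auth", "secret")
# These are the raw bytes to push after AT+CIPSEND, not ASCII text.
```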
\n" }, { "Id": "2993", "CreationDate": "2018-05-27T00:14:28.840", "Body": "

I wanted to use my Amazon Echo Dot to connect to my Raspberry Pi, and have it turn on switches wirelessly.

\n\n

What I have done: I have been able to use Makermusings's Fauxmo and a relay board to \"trick\" Alexa to think that the Raspberry Pi is a Wemo device, and switch things on and off. However, that limits one to set all of the desired devices around a single area that is close to the Raspberry Pi.

\n\n

I wanted to do the following:

\n\n
    \n
  1. Say command: \"Alexa, turn on lamp\"
  2. \n
  3. Alexa sends command to Raspberry Pi through Fauxmo
  4. \n
  5. Raspberry Pi turns on a switch far away that may be in the kitchen, but would use some sort of wireless switch
  6. \n
\n\n

Question: How can I control a switch wirelessly through Alexa/Raspberry Pi?

\n", "Title": "How to Use Alexa & Raspberry Pi switch things wirelessly?", "Tags": "|smart-home|alexa|raspberry-pi|wireless|wemo|", "Answer": "\n\n

You can also use something like Node-RED (comes pre installed in raspbian) and the node-red-contrib-alexa-home-skill (I wrote this node) to send the messages and have more control than just the actions that Wemo supports.

\n" }, { "Id": "2995", "CreationDate": "2018-05-27T20:16:42.887", "Body": "

I set up a Raspberry Pi 3B with Windows 10 IoT to try it out. In the past I had Linux running on it and I would just connect via SSH. However, when trying to SSH from PowerShell to Windows 10 IoT, it fails with this error:

\n\n
Unable to negotiate with 10.155.41.47 port 22: no matching cipher found. Their offer: aes256-cbc,aes192-cbc,aes128-cbc\n
\n\n

I use this command:

\n\n
ssh administrator@10.155.41.47\n
\n\n

Is it just not possible to SSH from Windows 10 PowerShell to Windows 10 IoT?

\n", "Title": "SSH to Raspberry with Windows 10 IoT", "Tags": "|raspberry-pi|microsoft-windows-iot|", "Answer": "
    \n
  1. Download PuTTY (64-bit or 32-bit, as needed)

  2. \n
  3. Open the command prompt on the Windows IoT Core OS

  4. \n
  5. Run "ipconfig" and note the IPv4 address.

  6. \n
  7. Now add your network with the Raspberry Pi board's IP address, using either:

    \n\n
      \n
    • netsh method
    • \n
    • advanced settings method
    • \n
  8. \n
  9. Once you know the IP, open PuTTY, enter the IP address, select SSH and open the connection

  10. \n
  11. Enter the user name and password

  12. \n
  13. Now you can access the SSH service of Windows 10 IoT Core.
  14. \n
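Separately, the error in the question ("no matching cipher found. Their offer: aes256-cbc,aes192-cbc,aes128-cbc") means the Windows 10 IoT SSH server only offers CBC-mode ciphers, which recent OpenSSH clients disable by default. If you'd rather keep using ssh from PowerShell, re-enabling one of those ciphers for just that host may work; a sketch (the host IP is the one from the question):

```
# ~/.ssh/config on the client machine
Host 10.155.41.47
    Ciphers +aes256-cbc
```

The `+` prefix appends the cipher to the client's default list instead of replacing it.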
\n" }, { "Id": "3002", "CreationDate": "2018-05-29T10:19:34.293", "Body": "

The IKEA Tradfri gateway comes with an Ethernet cable, but everyone online talks about it being connected to Wi-Fi. I found teardowns of the hardware showing that it has Wi-Fi chips, and there is a \"connect\" button on it.

\n\n

But how do I actually connect it to Wi-Fi? The connect button doesn't connect it via WPS (my first thought), and the Tradfri app doesn't have any Wi-Fi settings section; what am I doing wrong?

\n", "Title": "How to connect IKEA Tradfri Gateway to a WiFI?", "Tags": "|ikea-tradfri|", "Answer": "

The Tradfri gateway is not a WPS-certified or Wi-Fi-certified product; it's a wired product connected to your router.

\n

The app is only usable when connected to your router's Wi-Fi. If you want to control devices outside of the Wi-Fi network, you must enable Google Assistant or Amazon Alexa. Tradfri bulbs and remotes are also compatible with Philips Hue or SmartThings; you can use the Tradfri devices standalone on these two systems, but without the gateway you won't receive firmware updates.

\n" }, { "Id": "3008", "CreationDate": "2018-05-30T08:00:09.587", "Body": "

I have set the acl_file property in /etc/mosquitto/mosquitto.conf to /home/ubuntu/my.acl. In the ACL file, I have allow_anonymous false to disallow connections from clients that do not have a user name.

\n\n

From MQTT Lens, I tried connecting to this broker without passing a user name. I don't see a connect message immediately in the log. However, when I delete the connection from MQTT Lens, I see a connected message followed by a disconnect message.

\n\n
ubuntu@ip-172-31-42-207:/var/log/mosquitto$ tail -f mosquitto.log  \n1527666604: New connection from 183.xx.xx.xx on port 1883.\n1527666604: New client connected from 183.xx.xx.xx as lens_bWIkzTxFSc2BIIUQqqT35ipsiPV (c1, k120).\n1527666746: Client lens_bWIkzTxFSc2BIIUQqqT35ipsiPV disconnected.\n
\n\n

How can I confirm that allow_anonymous false actually denied the connection?

\n", "Title": "mosquitto - allow_anonymous false shows connected message", "Tags": "|mqtt|security|mosquitto|authentication|", "Answer": "

allow_anonymous false needs to be in the mosquitto.conf file, not the ACL file.

\n\n

IIRC, the logging is delayed because of a minor bug in some versions of mosquitto, since you are not driving any other load through the broker.

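For reference, a sketch of a mosquitto.conf combining the settings discussed (the password_file line is an assumed, common companion setting, not something from the question):

```
# /etc/mosquitto/mosquitto.conf
allow_anonymous false
password_file /etc/mosquitto/passwd
acl_file /home/ubuntu/my.acl
```

Topic-level rules then go in the ACL file, while connection-level settings like allow_anonymous stay in mosquitto.conf.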
\n" }, { "Id": "3025", "CreationDate": "2018-06-03T17:55:08.577", "Body": "

I am new to LoRa technology and I was reading its technical specifications on the lora-alliance.org website. In the document, the SF is always considered to be between 7 and 12. What is the reason for not considering values beyond this range, say SF6 or SF13? Is it because of receiver sensitivity, or is there a mathematical reason behind it?

\n", "Title": "Spreading Factor of LoRa", "Tags": "|lora|lorawan|", "Answer": "

The reasons are purely technical.

\n\n

In the SX127x chips, the PHY header in explicit header mode is always 28 bits long and must fit into the first 8 symbols, which are hard-coded at CR 4/8 redundancy. At SF7, 8 symbols at CR 4/8 encode exactly 28 bits. At SF6, they encode only 24 bits, which is not enough. And if you can't send an explicit header, you can't send a variable-length frame, so no SF6 LoRaWAN frame.\nThe SX126x chips get their new framing engine from the SX128x but have to stay compatible with the established SX127x and SX130x base, so no SF6 LoRaWAN either.

\n\n
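The symbol arithmetic in that paragraph can be written out: each symbol encodes SF raw bits, and the 4/8 coding rate halves that, so 8 symbols carry 4 x SF information bits. A quick check:

```python
def header_bits(sf: int, symbols: int = 8) -> float:
    """Information bits in the first `symbols` LoRa symbols at CR 4/8:
    each symbol encodes SF raw bits, halved by the 4/8 coding rate."""
    return symbols * sf * 4 / 8

print(header_bits(7))  # 28.0 -> the 28-bit explicit PHY header fits exactly
print(header_bits(6))  # 24.0 -> too small for the 28-bit header, hence no SF6
```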

SF13 is possible but you would have to double the size of the FFT unit and buffers in the chip, increasing price, for a diminishing return. It was considered not worth it.

\n" }, { "Id": "3026", "CreationDate": "2018-06-04T22:44:08.303", "Body": "

I'm working on an IoT project that involves thousands of MQTT clients connected to a broker (Mosquitto) via 4G/Wi-Fi routers/modems. Fewer than 10 clients are connected to each router, and the routers are in different places (different cities).

\n\n

Right now we have very few clients and they are always connected to the broker. I know from this discussion that there will be no problems even when they grow to 1000+ units.

\n\n

My question is about the traffic load on the 4G connection. The end user is worried about data consumption with all those \"channels\" open but unused.\nAs far as I understand, when there is no activity only the keep-alive packets are sent, though I cannot find this stated explicitly in the MQTT documentation.

\n\n

Can I assume the traffic when no packets are published is negligible?

\n", "Title": "MQTT always connected and 4G data load", "Tags": "|mqtt|", "Answer": "

The MQTT spec lists the details of the PINGREQ and PINGRESP packets which make up the keep-alive transaction.

\n\n

Each is just 2 bytes in size, so a full keep-alive exchange uses 4 bytes in total. Since you can control how often keep-alive packets are sent for each client (based on how quickly you need to know that a connection has dropped), you have full control over how much data is used when no messages are actually being published.

\n\n

If you want to reduce the data load even more, you could run a separate broker behind each 4G router that the 10 devices connect to, bridged to the central broker. This would reduce the number of keep-alive packets to 1 per 4G router rather than 1 per client. It also has the advantage that the 10 local devices can continue to pass messages between each other if the link goes down, and you can use retained messages/Last Will and Testament messages to track when individual clients go down.

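To put numbers on it, a sketch of the keep-alive overhead at the MQTT layer (the 60 s keep-alive interval is an assumed value; TCP/IP framing adds roughly 40 more bytes per packet on the wire, which this deliberately ignores):

```python
PING_BYTES = 4              # PINGREQ (2 bytes) + PINGRESP (2 bytes), per the sizes above
SECONDS_PER_DAY = 86400
keepalive_s = 60            # assumed client keep-alive interval
clients_per_router = 10

per_client_day = PING_BYTES * SECONDS_PER_DAY // keepalive_s   # 5760 bytes/day
direct = per_client_day * clients_per_router    # every client pings the central broker
bridged = per_client_day                        # only the bridge connection pings it
print(direct, bridged)  # 57600 5760
```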
\n" }, { "Id": "3034", "CreationDate": "2018-06-09T20:04:17.147", "Body": "

I have a small IoT project where I want a network-enabled electrical relay. I do not need a full network plug nor Linux on it. It just needs to be small and cheap. I feel smart enough to write a small HTTP server in C/C++ (if required) to change the state of the relay. Of course I searched the web and found the almost-matching duplicate Cheap IoT microcontroller with PoE

\n\n

However, this is much bigger than what I have in mind. I want a small board with 6 screw terminals to mount the wires (2 or more relays would be fine too). I want to control a device with less than 25 V (I'm not sure, but very likely 12 V DC). However, I do not want to write the TCP/IP stack on my own.

\n\n

Any ideas?

\n", "Title": "Cheap PoE board with TCP/IP support", "Tags": "|hardware|microcontrollers|ethernet|", "Answer": "

Well, I am a big fan of the Omega 2, which is not much bigger than a full size SIM card or SD card. It runs Linux, so there is your TCP/IP stack.

\n\n

\"enter

\n\n

It has a good spec and will set you back a whopping $13.

\n\n
\n
580MHz MIPS CPU\n128MB Memory\n32MB Storage\nUSB2.0 support\n2.4GHz b/g/n WiFi\n3.3V Operating Voltage\n18 GPIOs\nSupport for UART, I2C, SPI\nMicroSD slot\n
\n
\n\n

Just by coincidence, I received their newsletter about 20 minutes ago and it describes a PoE project.

\n\n

That will set you back another few bucks, so I will leave it to you to decide if it is \"cheap\". Hope this helps.

\n" }, { "Id": "3035", "CreationDate": "2018-06-09T20:06:57.663", "Body": "

So, as I understand it, the Z-Wave Plus backwards-compatibility requirements cover communication between Z-Wave Plus and original Z-Wave devices. Personally, I specifically take \"devices\" as the IoT term for sensors and other IoT objects.

\n\n

My question is whether or not the backwards compatibility also applies to a Hub or controller (specifically the SmartThings Hub v1)?

\n\n

I found this post to be slightly confusing, stating 1) that you need the controller to be Z-Wave Plus, but 2) that you won't get all of the advantages. Ultimately, I have not found a source definitively saying yes or no to whether the v1 hub (original Z-Wave) can control a Z-Wave Plus device.

\n", "Title": "Z-Wave Plus Backwards Compatibility", "Tags": "|samsung-smartthings|zwave|", "Answer": "

Vesternet claim that:

\n\n
\n

The main consideration is the Z-Wave controller - if the controller is not Z-Wave Plus enabled then all devices added to that controller's network will default to acting as Z-Wave. This is because Z-Wave Plus is back-wards compatible with Z-Wave devices, when it is installed with Z-Wave devices it behaves just like a Z-Wave device as those existing devices have no way to communicate with it using Z-Wave Plus commands.

\n
\n\n

It's stated on the OpenHAB forums that Z-Wave Plus devices can still be controlled using their bindings for Z-Wave, which would indicate that you shouldn't lose any functionality.

\n\n

It seems that the only thing that you miss out on is the protocol improvements such as lower power usage and increased fault tolerance; actual functionality is unchanged even if you use an old Z-Wave controller.

\n" }, { "Id": "3041", "CreationDate": "2018-06-10T14:54:52.400", "Body": "

I can\u2019t find this despite lots of googling: I want to flip a switch in my living room and have every floor lamp turn on also. But I don\u2019t need a remote, I don\u2019t need internet connectivity, I don\u2019t need schedules, and I definitely don\u2019t want anything battery powered. \u201cJust\u201d one central classic switch which has multiple plugs slaved to it, so they operate with its status.

\n\n

In my imagination this would involve swapping the wall switch with something smart, and it and the main light would continue being connected to the outlet as normal. And then a couple slaved plugs that connect to normal wall sockets that turn the lamps on and off.

\n\n

What I\u2019m looking for is simplicity. No fancy colours, no remotes to recharge. Am I missing a google keyword? I\u2019m an IoT virgin, so far my findings are dominated by SmartThings and Hues and remotes and hubs, and it all seems too complicated for my needs. Any insights, suggestions, or gentle introductions welcome.

\n", "Title": "Do simple, wirelessly connected, but not battery powered switches exist?", "Tags": "|smart-lights|", "Answer": "

Insteon components can be used stand-alone without a hub.

\n\n

Purchase any combination of dimming plugin module, on/off plugin module, or duplex outlet (dual control-each outlet can be controlled separately or linked together).

\n\n

Replace a wall switch and link it all together manually. You will get exactly what you asked for, the ability turn on/off one switch and have multiple things turn on/off. You can also add many more devices (now or later) than you will ever need... you can keep it super simple or make it very complex.

\n\n

Manual linking does work, but is a tedious process with a learning curve. You can also add a hub which gives you all those things you don\u2019t want, like internet, schedules, remote from your phone etc etc. but with the added advantage of more simplified setup. (No manual linking -much easier to do linking with hub). AND once the modules are setup (I.e. linked or programmed) the hub can be unplugged and put away and it all continues to work standalone without the hub. I mention that because there are bundles and starter kits that make the hub essentially free.

\n\n

Not obvious to newcomers: 1) Insteon and smarthome.com have the same owner, it\u2019s always least expensive to buy direct from smarthome.com 2) smarthome has lots of sales and promos, never pay the regular website full price. Also don\u2019t miss the multi-packs that also save. 3) many items are available on amazon but mostly avoid it, some of the product has really old date codes and firmware when purchased through amazon.

\n\n

Many Insteon items are available in other electrical standards than USA. Such as European, although it\u2019s easier to see compatibility from the country in question, the US site doesn\u2019t show all international product. See here for more international purchasing options.

\n\n

Disclaimer: I don\u2019t work at smarthome/Insteon. I just have a huge complicated Insteon system with over 100 switches and modules. (The only two non-Insteon switches in my house both control garbage disposals.)

\n" }, { "Id": "3049", "CreationDate": "2018-06-13T06:43:05.023", "Body": "

Can someone here help me out with code to send data from an ESP32 to AWS S3 storage?

\n\n

I am using ESP-IDF, as I have other code running on the ESP32 as well.

\n\n

I am new to aws so any help would be good.

\n", "Title": "Data from ESP32 to AWS S3", "Tags": "|aws|esp32|", "Answer": "

Not the complete answer but a step forward: https://docs.aws.amazon.com/freertos/latest/userguide/getting_started_espressif.html

\n\n

AWS has released their ESP32 support with AWS FreeRTOS.

\n\n

To send directly to S3 without going through IoT Core, you will need your endpoint (obviously) and certificates on the device side to authenticate with the Cloud side. With the right policies and roles on the Cloud side, a trusted device should be able to post to S3.

\n" }, { "Id": "3057", "CreationDate": "2018-06-14T11:25:42.767", "Body": "

I am working with several Arduino boards and now I need to control them via a web interface.

\n\n

Via the web interface I want to activate GPIOs.

\n\n

I have two ideas:

\n\n
    \n
  1. Each Arduino acts as a web server and I can control the GPIOs via the Arduino's web page. Basically, one browser tab for each Arduino.
  2. \n
  3. Use the MQTT protocol to exchange messages with the Arduino boards. Furthermore, I am thinking of using a Raspberry Pi as both the web server and the MQTT broker. Each Arduino board is subscribed to a specific topic, and through a web page hosted on the Raspberry Pi I can control the Arduino GPIOs.
  4. \n
\n\n

The first solution is very quick and simple.

\n\n

Regarding the second option, I don't know how to send MQTT messages from a web page. I read that I need to use WebSockets. Is that right? Do I need to write code in JavaScript?

\n\n

My second question is: can the MQTT broker manage both MQTT and MQTT over WebSockets at the same time? Otherwise I would need to use WebSockets on the Arduinos too.

\n\n

Another option is to build a Python script with a GUI that allows sending MQTT messages to the Arduinos.

\n\n

Is there a best way?

\n\n

Thanks for the help!

\n", "Title": "Control arduino via MQTT", "Tags": "|mqtt|raspberry-pi|arduino|", "Answer": "

What you've written all seems reasonable to me.

\n\n

MQTT traditionally runs over TCP1, but your browser does not allow webpages to open a raw TCP socket. There are proposals to allow that, but I doubt they'll be implemented any time soon. So, your browser can't connect to a MQTT broker only supporting TCP connections.

\n\n

The solution is, as you've identified, to use a WebSocket\u2014these are supported by the browser and so some JavaScript code can be used to connect to an MQTT broker through a web page. HiveMQ have an example you can play with, or you can try a library such as MQTT.js which supports WebSocket communication with an MQTT broker.

\n\n

Most brokers\u2014and all I'm aware of\u2014won't care about whether a client is a WebSocket or TCP client. You can happily connect both to one broker, and you can find instructions on how to configure a Mosquitto broker on Stack Overflow2.

\n\n
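If you're using Mosquitto, serving both kinds of client is just a matter of declaring two listeners. A minimal mosquitto.conf fragment (the port numbers are the conventional defaults; adjust to taste):

```
# Plain MQTT over TCP
listener 1883

# The same broker also accepting MQTT over WebSockets
listener 9001
protocol websockets
```

Both listeners share the same topic space, so a browser client on port 9001 and an Arduino on port 1883 see each other's messages.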

With regards to a best way... it's up to you. If you are happy with JavaScript, then there's no problem using that. If Python's easier, do that (you wouldn't need to set up WebSockets support that way). You could even just use pre-built client software if you didn't care about the UI too much.

\n\n
\n\n

1 MQTT 3.1.1 does allow for TLS or WebSocket connections also; see section 4.2 of the specification. There is a variant, MQTT-SN, where the requirement for TCP is relaxed. Either way, you're probably not worried about MQTT-SN for your use case.

\n\n

2 Note that on Windows, the Mosquitto build does not have WebSocket support enabled. You will need to build Mosquitto yourself if you want to use it on Windows. Alternatively, you could try a different broker that doesn't restrict you in this way.

\n" }, { "Id": "3068", "CreationDate": "2018-06-18T05:14:29.253", "Body": "

What is the difference between industrial controllers and prototype (study-level) controllers, i.e. Raspberry Pi/Arduino vs. industrial controllers (PLC and non-PLC)?

\n", "Title": "Industrial Controller Vs Prototype Controller(Study level Controller)", "Tags": "|raspberry-pi|microcontrollers|arduino|industry-4.0|plc|", "Answer": "

Industrial controllers are driven by really intense quality requirements. For example, a single controller may be responsible for a bioreactor growing a batch of culture worth $1,000,000USD to a manufacturer. Industrial controllers have to run for months and months without failure.

\n\n

In contrast, prototype, hobby and study controllers such as the Raspberry Pi and Arduino are designed to support exploration and teaching. Although quality is important, it is not a primary concern. For Raspberry Pi controllers, ease of use is paramount. Raspberry Pi excels at ease of use and is a rich platform for experimentation.

\n\n

Here's a specific example. Modern industrial controllers support web services for configuration, monitoring and maintenance. When I set up my Raspberry Pi controller for aeroponics, I also used web services for the same reason. The Raspberry Pi supports many web servers, and I chose Node.js, which runs JavaScript. JavaScript is an interpreted language with automatic memory management, and a wonderful language for experimentation and exploration. So is Python. But interpreted languages tend to die for mysterious reasons related to memory leaks and fragmentation, which means my Raspberry Pi dies every now and then and has to be rebooted.

\n\n

These failures that I accept for the Raspberry Pi would be unacceptable for an industrial controller. For industrial controllers, I would choose a C++ web server such as Mongoose, which is small, simple and quite robust. Now, I could certainly run Mongoose on the Raspberry Pi, but Raspbian itself might not be best for industrial control. Industrial controllers tend to have custom-built, tightly managed operating systems created with Yocto.

\n\n

Industrial controllers are often subject to strict regulations, especially in the biotech sector, where mistakes can impact human life and welfare. Even simple things like enclosures are regulated. For example, an industrial controller enclosure is often subject to \"waterproofness ratings\".

\n\n

Taken together, the stringent constraints on industrial controllers cost manufacturers and consumers a lot, but that cost provides high value in terms of safety and yield. Thankfully, many hobbyists are able to create controllers with equivalent functionality (but less stringent quality) using boards such as the Arduino and Raspberry Pi. Indeed, the latter provide a fertile frontier of new ideas and techniques for all controllers.

\n" }, { "Id": "3085", "CreationDate": "2018-06-22T08:53:02.983", "Body": "

Is there any way to check if actually my embedded device has a working connection (i.e. can reach the WAN)? My device has no RTOS, so I cannot rely on OS functionalities like ping. I can see that the DHCP gives a correct IP to my device, but it's not 100% true that given a correct IP I can then reach for example www.google.com.

\n\n

I've already done some research, and there are different approaches:

\n\n
    \n
  1. it's impossible
  2. \n
  3. it's a stupid question
  4. \n
  5. workarounds of every sort, but I feel confident with respect to SO so I'll give it a try.
  6. \n
\n\n

So, if this question has 1) or 2) as its answer, I'll remove it and that's OK.

\n", "Title": "Embedded C\u2014check internet connection", "Tags": "|networking|microcontrollers|", "Answer": "

Thanks for the support, everyone. I finally used the method described by Helmar: just try to reach the desired target and see what happens. If I obtain a reply from the target, I know that my connection is alive and functioning; otherwise I disconnect the device and retry with a fresh connection.
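For illustration, here is the same "just try to reach the target" check sketched in Python on a desktop. On a bare-metal device the equivalent is a connect call with a timeout in whatever TCP/IP stack the firmware uses; the host and port below are placeholders.

```python
import socket

def link_alive(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout.

    Desktop sketch of the approach: on a bare-metal device the same logic
    maps onto the stack's connect call with a deadline.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A failed check is the cue to tear down the connection and start a fresh one, as described above.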

\n" }, { "Id": "3093", "CreationDate": "2018-06-24T04:55:11.117", "Body": "

I want to understand the persistence related options in Mosquitto as described here.

\n\n

To begin with, do these options apply only in case of QoS > 0 and/or retained messages?

\n\n\n\n
\n

If true, connection, subscription and message data will be written to\n the disk in mosquitto.db at the location dictated by\n persistence_location.

\n
\n\n

What is the meaning of 'message data' - the actual payload? Only when retained or otherwise as well?

\n\n\n\n
\n

If not given, then the current directory is used.

\n
\n\n

What is the current directory relative to?

\n", "Title": "mosquitto - persistence configuration options", "Tags": "|mqtt|mosquitto|publish-subscriber|", "Answer": "

Messages are persisted if persistence is set to true and the messages are retained, or are QoS 1/2 messages queued for persistent (offline) sessions.

\n\n\n\n

The whole message is stored in memory and synced to disk at regular intervals (controlled by the autosave_interval option) or when the broker shuts down, to ensure data is not lost.

\n\n

As with all processes, the current directory is the location from which the process was started, e.g. if your shell is in your home directory /home/user when you run mosquitto, then the current directory will be your home directory. When mosquitto is run as a service this will probably be /, in which case the mosquitto user would not have permission to write there. It is always better to be explicit about where to write logs and persistence data.
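For a service install, an explicit configuration along these lines avoids the problem (the path is illustrative; the Debian package, for example, uses /var/lib/mosquitto/):

```
persistence true
persistence_location /var/lib/mosquitto/
autosave_interval 1800
```

Note that persistence_location must end with a trailing slash and be writable by the user the broker runs as.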

\n" }, { "Id": "3095", "CreationDate": "2018-06-24T09:10:02.270", "Body": "

I have an MQTT Input node that kicks off database operations (SELECT and INSERT) for a PostgreSQL database. The database operations are done with node-contrib-postgres-multi. Since these operations are separated by function nodes, I am saving a portion of the message with flow.set and retrieving it with flow.get later. For example,

\n\n\n\n

I can't help but imagine that the flow.set and flow.get are not in sync.

\n\n

Currently, I am simulating around 20 devices publishing data every second, with the timestamp increased by 1 second for every publication. There is absolutely no reason why the generated messages should ever be duplicated. However, the database insert nodes fail because of a unique index violation, as seen in the Node-RED log (.pm2/logs/red-out-0.log).

\n\n

If the function nodes and database processing take, say, 2 seconds and the MQTT messages (QoS=0) are received every second, would MQTT or Node-RED buffer them? Is every received message treated as a unit of work until it errors out or 'leaves' the flow into a database, HTTP request, MQTT publish, etc.?

\n", "Title": "node-red - Units of work", "Tags": "|mqtt|publish-subscriber|node-red|", "Answer": "

It's worth remembering that everything in the NodeJS world is asynchronous which means nothing blocks the event loop.

\n\n

In this case it sounds like you get the first incoming message, which you store under a fixed key in the flow context, and you then move on to the SELECT query. At this point the SQL node ends up doing some network IO, and while it waits for a response from the database it gives up the execution context.

\n\n

If, while it's waiting, a new MQTT message arrives, it will be handled immediately and passed to the first function node, which will overwrite whatever was stored in the flow context because the key is the same.

\n\n

When the first SELECT statement returns, the first message will move on to the second function node, and when it retrieves the value from the flow context it will get the second message's data, not the first's.

\n\n

The way to solve this is not to use the context to keep state, but to move the information you want to keep from msg.payload to a different key on the msg object. Well-behaved Node-RED nodes should always pass on the original msg object and by default only change msg.payload (there are exceptions, but they tend to document what they change and why).

\n\n

Using the msg object to hold state for a unit of work guarantees that the state can only change in step with the message travelling through the flow.
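As a stand-alone sketch of that pattern (the function names and queries are illustrative; in Node-RED each would be the body of a Function node, with msg supplied by the runtime):

```javascript
// First function node: stash the original payload ON the message itself,
// not in flow context, so concurrent in-flight messages cannot clobber it.
function stashOriginal(msg) {
  msg.original = msg.payload;      // travels with this msg only
  msg.payload = "SELECT now()";    // placeholder for the real SELECT query
  return msg;
}

// Second function node: read the stashed value back from the same msg,
// so it is guaranteed to belong to this message and no other.
function buildInsert(msg) {
  msg.payload = "INSERT INTO readings VALUES ('" + msg.original + "')";
  return msg;
}
```

Two messages can now overlap freely in the flow; each one carries its own state.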

\n" }, { "Id": "3099", "CreationDate": "2018-06-25T11:39:50.253", "Body": "

I'm trying to set up AWS IoT authentication using my own certificate according to the docs. I've managed to register a CA, enabled it, and set it to auto-register. I also created the device cert & key according to the docs. When I first connect my device using the freshly generated key & cert, it won't work; there is no sign of a connection. It should publish a message to the $aws/events/certificates/registered/caCertificateID topic, but in the MQTT console I'm unable to see anything on that topic. I've also tried attaching a template according to the JIT provisioning docs; same result, nothing seems to happen.

\n\n

When I manually register the device cert (aws iot register-certificate --certificate-pem file://deviceCert.pem --ca-certificate-pem file://rootCA.pem) it is then able to connect to AWS.

\n\n

What is going wrong?

\n", "Title": "AWS IoT Authentication - Using own certificate not working", "Tags": "|aws|authentication|", "Answer": "

I ran into a similar problem with a different solution.

\n\n

If you create a certificate using a default openssl.cnf, or some other mechanism that generates an SSL certificate such as PHP's openssl_csr_sign, make certain your generated certificate does not have x509_extensions set that turn it into a CA certificate.

\n\n

After three days of banging my head against the wall, I discovered that Amazon IoT will reject any device certificate that has CA:true set. Use OpenSSL to verify:

\n\n

openssl x509 -in deviceCertificate.pem -text -noout

\n\n

You should see something like this:

\n\n
 X509v3 extensions:\n            X509v3 Basic Constraints: \n                CA:FALSE\n
\n\n
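If you generate the device certificate yourself, signing the CSR without any x509 extensions file is one way to guarantee the result is not a CA. A sketch with openssl (file names are illustrative, and the throwaway CA here stands in for the CA you registered with AWS):

```shell
# Throwaway CA (stands in for the rootCA.pem/rootCA.key registered with AWS)
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -key rootCA.key -subj "/CN=demo-ca" -days 1 -out rootCA.pem

# Device key and certificate signing request
openssl genrsa -out device.key 2048
openssl req -new -key device.key -subj "/CN=my-iot-device" -out device.csr

# Sign WITHOUT an -extfile: no basicConstraints are copied in,
# so the resulting certificate cannot carry CA:TRUE
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key \
        -CAcreateserial -days 365 -out deviceCert.pem
```

Running the verification command from above on deviceCert.pem should then show no CA:TRUE constraint.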

I hope this helps someone save some time.

\n" }, { "Id": "3101", "CreationDate": "2018-06-25T13:03:27.640", "Body": "

I am building a project which has the following requirements:

\n\n
    \n
  1. My hardware device (NanoPi) should access a broker for a video key.
  2. \n
  3. The broker should have a queue which will hold the video keys sent by the server and will forward it to the device on request.
  4. \n
  5. The device will push the video key to the remote server and request for the specific video.
  6. \n
  7. The remote server will send the video to the device and the device will display it on the monitor.
  8. \n
  9. On completion of the video, the device will again request for a new video key.
  10. \n
\n\n

Which broker should I use which will store the video keys in a queue? Will a MQTT broker be suitable for my application? If not which other broker should I use?

\n", "Title": "Can MQTT be used for queuing?", "Tags": "|mqtt|https|streaming|", "Answer": "

I agree with Aurora's answer: there may be better tools than MQTT for implementing a queue, but it is absolutely possible with MQTT.

\n\n

You need an MQTT broker and a session with clean_session=false (in MQTT 3.x) or with an appropriate session expiry interval (MQTT 5.0). Then the logic is pretty straightforward.

\n\n

You have a persistent (but not connected) client session, subscribed with QoS=1 to the correct topic, to receive and buffer the keys published by your key generator.

\n\n

The device connects, resuming that session by authorizing with the appropriate client_id, and immediately receives the buffered keys. You should acknowledge only the first key and then close the connection.

\n\n

That is: you have one key on the client, and when the MQTT broker receives the ack for that packet, it removes the key from the session storage.

\n" }, { "Id": "3109", "CreationDate": "2018-06-26T15:04:06.253", "Body": "

Suppose I want to control my room lights over WiFi using mobile phone. But I also want to control them by using the switches in my room to which they are connected. Is it possible to implement them both? I watched and read a couple of tutorials on Home Automation over WiFi but they only focused on controlling the home appliances remotely.

\n", "Title": "Is it possible to control a home appliance remotely as well as using the switch to which it is connected?", "Tags": "|smart-home|lighting|", "Answer": "

I believe the Ilumi light bulb gets as close to this as you can get. It uses bluetooth instead of wifi, but the principle is mostly the same. Basically, the situation is the following:

\n\n\n\n

In order to achieve this, they have the following properties:

\n\n\n\n

With these things in mind, it probably makes a lot of sense that the bulbs use mesh networking instead of a hub.

\n\n

I wouldn't exactly recommend getting these bulbs. That's mostly due to the behavior of the company, though. The mesh networking also isn't always the best, but it has definitely gotten better throughout their updates. However, I do still think that being able to use your normal light switch is the killer feature that all competitors seem to be missing.

\n" }, { "Id": "3120", "CreationDate": "2018-06-27T13:27:57.947", "Body": "

This is my code, and below is the error I keep getting.

\n
import configparser\nfrom time import localtime, strftime\nimport json\nimport paho.mqtt.client as mqtt\n\n\nconfig = configparser.ConfigParser()\nconfig.read('/home/pi/bin/py.conf')     # Broker connection config.\n\nrequestTopic  = 'services/timeservice/request/+'        # Request comes in \nhere. Note wildcard.\nresponseTopic = 'services/timeservice/response/'        # Response goes \nhere. Request ID will be appended later\n\ndef onConnect(client, userdata, flags, rc):\n   print("Connected with result code " + str(rc))\n\ndef onMessage(client, userdata, message):\n   requestTopic = message.topic\n   requestID = requestTopic.split('/')[3]       # obtain requestID as last \nfield from the topic\n\n   print("Received a time request on topic " + requestTopic + ".")\n\n   lTime = strftime('%H:%M:%S', localtime())\n   \n   client.publish((responseTopic + requestID), payload=lTime, qos=0, \nretain=False)\n\ndef onDisconnect(client, userdata, message):\n    print("Disconnected from the broker.")\n\n\n# Create MQTT client instance\nmqttc = mqtt.Client(client_id='raspberrypi', clean_session=True)\n\nmqttc.on_connect = onConnect\nmqttc.on_message = onMessage\nmqttc.on_disconnect = onDisconnect\n
\n

And after I try to connect to a broker:

\n
mqttc.username_pw_set(config['MQTT']['userMQTT'], password=config['MQTT']['passwdMQTT'])\nmqttc.connect(config['MQTT']['hostMQTT'], port=int(config['MQTT']['portMQTT']), keepalive=60, bind_address="")\n
\n

I get the following error:

\n
Traceback (most recent call last):\n  File "<stdin>", line 1, in <module>\n  File "/usr/lib/python3.5/configparser.py", line 956, in __getitem__\n    raise KeyError(key)\nKeyError: 'MQTT'\n
\n

Does anybody know how to fix this error while trying to connect to a broker?

\n", "Title": "KeyError while connectiong to a MQTT broker", "Tags": "|mqtt|", "Answer": "

If you haven't created /home/pi/bin/py.conf... that's your problem.

\n\n
config = configparser.ConfigParser()\nconfig.read('/home/pi/bin/py.conf')\n
\n\n

That code tries to open the file /home/pi/bin/py.conf and read it as an INI-like format, turning it into a dict-like object. Working back from your code, you need to add the following to /home/pi/bin/py.conf:

\n\n
[MQTT]\nuserMQTT = username\npasswdMQTT = password\nhostMQTT = broker address\nportMQTT = broker port\n
\n\n

(filling in the details, of course)

\n" }, { "Id": "3131", "CreationDate": "2018-06-30T21:20:40.300", "Body": "

I'm looking to install smart switches for my lighting. Since I have LED lights and 2-wire circuits to the switches (no neutral), I'm looking at wifi controlled remote relays - which I can find no problem.

\n\n

However, all these switches seem to integrate with smartphone apps - not with something I can fix to the wall in each room. Is there a good reason that I can't find a physical switch that can be hooked into some sort of smart control system? Obviously if there exists something, I can route it via IFTTT to close the loop (and remember that I need some legacy lighting just in case the network goes down)

\n\n

The closest I've found so far is a 433 MHz stick-on wall mount switch, and a 433MHz/WiFi relay - but I'm intrigued about why there is no existing product to meet this application.

\n\n

To clarify one specific use case, my wired switches seem to often turn out to be where I want some furniture - so I'm primarily looking to re-locate the switch without doing any re-wiring (and also gain connectivity). Preferably a manufacturer who addresses the EU market.

\n", "Title": "Controlling WiFi switches from physical switch", "Tags": "|lighting|", "Answer": "

Belkin have their Wemo light switch, which replaces a standard wall switch and allows remote control of ceiling lights.

\n\n

I haven't tried one, but I have the Wemo light bulbs and wireless sockets, and they work well with Home Assistant if you don't want to use Belkin's mobile app.

\n\n

(You still have to do the initial setup with the app, but once it's connected to your network you can ditch the app for HA)

\n\n
\n\n

Edit:\nThis answer mentions Bluetooth Low Energy switches which don't need batteries. I can't find them for sale online yet, though.

\n" }, { "Id": "3138", "CreationDate": "2018-07-03T04:23:13.993", "Body": "

I think node.error does not halt the flow in all cases. My aim is to catch errors, save the payload and stop the flow.

\n\n

Please refer the observations and experiments below.

\n\n

Observation

\n\n

Function node - Neither error is caught nor does the flow halt.

\n\n

JSON node - Error can be caught and flow halts.

\n\n

Postgres node - Error is not caught but the flow halts.

\n\n

Experiments

\n\n

All of the experiments below have Catch nodes to handle errors from all nodes in the flow.

\n\n

Function node

\n\n

I added a function node wired to an Inject node. The output of this node is a database INSERT query, wired to a Postgres node. In the function node, right at the top, I added node.error('Test error'). While the error can be seen in the console, I can also see the row inserted into the database, and I am not able to catch the error either.

\n\n

JSON node

\n\n

I wired a JSON node to an MQTT Input node to transform the JSON string payload into a JSON object. This object is passed to a function node that generates an INSERT query statement and is wired to a Postgres node. For testing, I passed a malformed JSON document, which causes an error in the JSON node. I can catch it, and the flow stops too.

\n\n

Postgres node

\n\n

With a good JSON document and no node.error() statement in the function node, the Postgres node inserts successfully. For testing, I created a well-formed JSON document that raises a duplicate-on-insert error. In this case, I can see the error message in the console, but I can't catch it.

\n\n

Note that the Postgres node does raise a node.error().

\n\n

Query execution

\n\n

Refer to this.

\n\n
try {\n            for (let i=0; i < queries.length; ++i) {\n              const { query, params = {}, output = false } = queries[i];\n              const result = await client.query(query, params);\n\n              if (output && node.output) {\n                outMsg.payload = outMsg.payload.concat(result.rows);\n              }\n            }\n\n            if (node.output) {\n              node.send(outMsg);\n            }\n          } catch(e) {\n            handleError(e, msg);\n          } finally {\n            client.release();\n}\n
\n\n

Error handling

\n\n

Refer to this.

\n\n
var handleError = (err, msg) => {\n        node.error(err);\n        console.log(err);\n        console.log(msg.payload);\n      };\n
\n\n

Questions

\n\n

How can I be sure that any node that raises a node.error() will definitely be caught by a Catch node? Or, in which circumstances will the Catch node not catch a node.error()?

\n", "Title": "node-red - node.error does not stop flow sometimes", "Tags": "|node-red|", "Answer": "

From the documentation:

\n
\n

If the error is one that a user of the node may want to handle for themselves, the function should be called with the original message (or an empty message if this is an Input node) as the second argument:

\n

node.error("hit an error", msg);

\n

This will trigger any Catch nodes present on the same tab.

\n
\n

So calling node.error() with one argument will not trigger a Catch node.

\n" }, { "Id": "3149", "CreationDate": "2018-07-08T10:43:52.297", "Body": "

Is it possible to implement AWS RTOS on nrf51822? If not what is the alternative solution for connecting nrf51822 to IoT cloud?

\n", "Title": "Is it possible to implement AWS free RTOS on nrf51822?", "Tags": "|aws-iot|aws-greengrass|", "Answer": "

The nRF51822 is a Bluetooth Low Energy device, mostly chosen for low-power applications. It cannot access the internet over Bluetooth on its own (unless you route its traffic through a Bluetooth gateway of some sort). If your application needs direct internet access, you should move to 802.11a/b/g/n Wi-Fi or 2G/3G/4G.

\n\n

So if internet access is what you're after, you can choose other options like the NodeMCU, ESP8266 or ESP32. These are WiFi devices that can access the internet using the IP stack.

\n\n

There you'll be able to use AWS FreeRTOS, ChibiOS and other RTOS-based systems with an MQTT library.

\n" }, { "Id": "3165", "CreationDate": "2018-07-12T08:51:23.207", "Body": "

I have been working on a water flow counter which sends data over WiFi, using an ESP8266 WiFi module. Everything is correctly connected and I've gotten to the point where it's connected to the WiFi network, except that when I try to ping its IP address (192.168.1.125) from cmd, it times out.

\n\n

I suspect this is relevant: I have read it may be because my IPv4 address is not in the 192.168.x.x range, but I'm not sure if that's the issue or, if so, how to fix it.

\n\n

I should state that the network is my company's and not my home WiFi, so I cannot change the network's IP addressing myself (though if really necessary I could speak to the network manager). Would the solution be to match the ESP8266's IP address to the range of the network (192.192...)?

\n\n

ESP8266 module IP address: 192.168.1.125\nMy IPv4 address: 192.192.10.125\nSubnet mask: 255.255.255.0\nDNS servers: 192.192.10.129\nDefault gateway: 192.192.10.252

\n", "Title": "Could not ping ESP8266 Wifi module connected to an Arduino", "Tags": "|networking|wifi|esp8266|communication|arduino|", "Answer": "

255.255.255.0 is the value that determines the SIZE of the subnet.

\n\n

192.168.1.xxx with the subnet mask 255.255.255.0 is a subnet spanning from 192.168.1.0 to 192.168.1.255.

\n\n

192.192.10.xxx with the subnet mask 255.255.255.0 is a subnet spanning from 192.192.10.0 to 192.192.10.255.

\n\n
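The two addresses from the question can be checked directly with Python's stdlib ipaddress module (addresses and /24 mask taken from the question):

```python
from ipaddress import ip_interface

# Addresses from the question, both with the 255.255.255.0 (/24) mask
esp = ip_interface("192.168.1.125/24")   # ESP8266 module
pc  = ip_interface("192.192.10.125/24")  # the PC doing the pinging

print(esp.network)                # 192.168.1.0/24
print(pc.network)                 # 192.192.10.0/24
print(esp.network == pc.network)  # False: traffic must cross a router
```

Since the two hosts are in different subnets, the ping only works if a router connects them.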

A router is needed for transporting IP from one subnet to another. Ref. https://en.wikipedia.org/wiki/Router_(computing)

\n\n

All devices in the subnet MUST have a unique IP & MAC address!

\n\n

Cisco has a good guide, "IP Addressing and Subnetting for New Users":

\n\n

https://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13788-3.html

\n" }, { "Id": "3172", "CreationDate": "2018-07-13T06:28:34.673", "Body": "

I bought this ESP8266 relay, and I wish to connect to it (in order to install MicroPython and use Python code). However, I can't find any way to start working with it.

\n\n

The item is Wi-Fi discoverable, and I got connected to it. But from there, I couldn't get to its web page to try to manage it, as I understood I should.

\n\n

I have also asked the seller, but they don't know how to support it.

\n\n

EDIT 1\nupdated link

\n\n

**EDIT 2**

\n\n

added pic.\n\"snapshot

\n\n

How can I connect to it in order to use MicroPython?

\n", "Title": "How to work with an ESP8266 with relay?", "Tags": "|esp8266|", "Answer": "

I would really recommend using C and the Arduino library for this, since it is much simpler. There is a library for the ESP8266 that makes it very easy to use: there is a straightforward connect method, and sending data is just like normal socket programming. Let me know if you are willing to use C and I will post the code. Good luck!

\n\n

Edit:

\n\n

This is the example code given in the docs for Arduino and the ESP8266. As you can see, it is very straightforward. I have added some comments to explain it a little better.

\n\n
#include <ESP8266WiFi.h> //This is the library I was talking about\n\nvoid setup()\n{\n  Serial.begin(115200); //Turns on Serial monitor for debugging.\n  Serial.println();\n\n  WiFi.begin(\"network-name\", \"pass-to-network\"); //The \"begin\" command uses the network name and password to connect to your network.\n\n  Serial.print(\"Connecting\");\n  while (WiFi.status() != WL_CONNECTED) //This is just to wait as it connects.\n  {\n    delay(500);\n    Serial.print(\".\");\n  }\n  Serial.println();\n\n  Serial.print(\"Connected, IP address: \");\n  Serial.println(WiFi.localIP()); //This will print the ESP's IP.\n}\n\nvoid loop() {\n  //Add your code here to control the relay.\n  //Use digitalWrite(pin, HIGH) to drive the stated pin high (3.3 V on the ESP8266).\n}\n
\n\n

Let me know if this helps.

\n" }, { "Id": "3184", "CreationDate": "2018-07-16T04:26:20.000", "Body": "

I am using an Arduino ATmega2560. I have defined Serial as CONSOLE, but when I use CONSOLE it shows me an error.

\n
#include "Arduino.h"\n#define CONSOLE Serial;\n\nvoid setup()\n{\n    CONSOLE.begin(9600);\n}\n\nvoid loop()\n{\n    CONSOLE.println("Hello");\n    delay(2000);\n} \n
\n

\"enter

\n", "Title": "Arduino Serial Error", "Tags": "|arduino|", "Answer": "

Try changing this:

\n\n
#define CONSOLE Serial;\n
\n

To this:

\n
#define CONSOLE Serial\n
\n

Notice the absence of the ; character in the change.

\n
\n

The preprocessor expands CONSOLE to Serial;, which results in, for example, Serial;.begin(9600);, a statement containing two semicolons, the first of which is unwanted.

\n" }, { "Id": "3186", "CreationDate": "2018-07-16T13:16:13.530", "Body": "

I want to measure the power consumption of an ESP8266-01. I connected my multimeter in series with the ESP8266 to measure the current, but then the ESP8266 does not power up and the multimeter reads zero.

\n", "Title": "Could not measure power consumption of ESP8266 module", "Tags": "|esp8266|power-consumption|", "Answer": "

This is really an Electrical Engineering question, not an IoT question.

\n\n

What you have discovered is related to the issue of \"burden voltage\". Essentially, modern meters do not directly read current, but instead place a shunt resistor in the circuit (ie, you properly connected the meter in series) and then measure the voltage drop across this.

\n\n

The challenge is in the selection of the shunt resistor: too small, and low currents cause a voltage drop spanning fewer counts of the measuring ADC than desired; too large, and the current drawn to operate the system drops a voltage large enough that the system cannot operate.

\n\n

With something that draws different amounts of power at different times, the problem gets worse - for example, if you have an MCU that starts up at full clock, then goes to sleep, it may be hard to power it through the meter until the point of sleep, on a current scale where the tiny sleep current can be measured. Digital radio systems have this problem as well, as the radio can draw 10 or more times baseline current, in brief pulses.

\n\n
    \n
  1. Still, it's worth experimenting with different ranges on your meter
  2. \n
  3. If your meter is auto ranging, you can use your own shunt resistor, and operate the meter in voltage mode.
  4. \n
  5. In some cases, you may want to start up the circuit with a clip lead bypassing the meter, and then once the circuit is believed to be in the low power mode you want to measure, you can remove the clip lead. If you are lucky this works, but you can end up timing it wrong, or causing a voltage drop that triggers a brownout detector.
  6. \n
\n\n

In a pulsed system, it's often useful to measure over time anyway. I've had some good results using a chosen resistor (depending on need, anywhere from 10-100 ohms) as a shunt, and adding a capacitor to moderate changes. This can then be monitored on an oscilloscope to see how power consumption varies over time.

\n\n
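As a rough guide to sizing that capacitor, the RC time constant sets which pulse widths get averaged out; the 10 ohm / 100 uF values below are example choices, not values from the answer:

```python
# RC smoothing sketch for viewing shunt current on an oscilloscope.
# Component values are example choices, not prescriptions.

def time_constant(r_ohms, c_farads):
    # tau = R * C, in seconds
    return r_ohms * c_farads

# A 10 ohm shunt with 100 uF across it gives tau = 1 ms: current pulses
# much shorter than ~1 ms appear as a smoothed average on the scope.
tau = time_constant(10.0, 100e-6)
```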

For a more formal study, I've found an INA219 I2C high-side current measurement breakout modified with a larger shunt resistor to be quite useful. It's managed by an Arduino which reads it at a high rate, and logs summary information - peak current, average over time, and integrated over shorter periods of interest. This serial measurement stream then gets merged with debug output from the device being studied, making it possible to see what various operations \"cost\".

\n\n

In taking these measurements, it's also important to consider that I/O interfaces like serial channels and programmers can contribute or steal current, so these either need to be disconnected or their impact evaluated before a reliable measurement can be made.

\n" }, { "Id": "3189", "CreationDate": "2018-07-16T15:40:08.747", "Body": "

From mosquitto.conf, the following options (among many others) exist for bridging.

\n\nconnection name\n\n
\n

This variable marks the start of a new bridge connection. It is also\n used to give the bridge a name which is used as the client id on the\n remote broker.

\n
\n\nremote_clientid id\n\n
\n

Set the client id for this bridge connection. If not defined, this\n defaults to 'name.hostname', where name is the connection name and\n hostname is the hostname of this computer.

\n
\n\n

If both the configuration options are specified and are different, then which ID applies to the remote broker?

\n", "Title": "mosquitto - bridging options for remote client identifier", "Tags": "|mqtt|mosquitto|bridge|", "Answer": "

Since the connection line is a required field to start a bridge config, remote_clientid will always override it; this should be clear from the description of remote_clientid.

\n\n

It states that the default will be

\n\n
\n

'name.hostname'

\n
\n\n

Where name is the argument given in the connection line.

\n\n
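Putting the two options together, a minimal bridge section might look like this (the broker address and topic are placeholders):

```
connection bridge01
address remote.example.com:1883
topic sensors/# out 0

# Without remote_clientid, the client id on the remote broker defaults
# to bridge01.<local hostname>; this line overrides it.
remote_clientid my-fixed-id
```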

(Also, this literally takes two minutes to test on the command line by starting two brokers.)

\n" }, { "Id": "3210", "CreationDate": "2018-07-21T08:14:15.677", "Body": "

The Node-RED documentation is clear - set metrics to true to track flow execution and memory usage information. So, I stopped Node-RED with pm2, edited the settings.js file and started Node-RED. But no metric output is seen in .pm2/logs/red-out-1.log! The debug nodes in various flows in my application have been configured to output to the console, and their output appears correctly.

\n\n

How do I view the metrics?

\n", "Title": "node-red - Metrics display with pm2?", "Tags": "|monitoring|node-red|nodejs|", "Answer": "

You have edited the wrong settings.js file. The version in the node_modules/node-red directory is the template that gets copied to the User directory when Node-RED is started for the very first time.

\n\n

When Node-RED starts it logs the settings file it is using as follows:

\n\n
21 Jul 13:31:49 - [warn] ------------------------------------------------------\n21 Jul 13:31:49 - [info] Settings file  : /home/hardillb/.node-red/settings.js\n21 Jul 13:31:49 - [info] User directory : /home/hardillb/.node-red\n21 Jul 13:31:49 - [warn] Projects disabled : set editorTheme.projects.enabled=true to enable\n21 Jul 13:31:49 - [info] Flows file     : /home/hardillb/.node-red/flows_tiefighter.json\n21 Jul 13:31:49 - [info] Server now running at http://127.0.0.1:1880/\n
\n" }, { "Id": "3218", "CreationDate": "2018-07-24T05:12:27.927", "Body": "

I connected a NodeMCU to batteries and uploaded code which sends an HTTP POST request every 10 minutes and then goes to deep sleep. But after 2 hours the NodeMCU stops sending data.\nThe voltage regulator used is an LD33V.

\n\n

Used AA batteries.

\n\n

The battery level when it stopped working was 2.5 V. (I do not know why the power consumption was so high even though we used deep-sleep mode.)

\n\n
#include <ESP8266WiFi.h>\n#include <WiFiClientSecure.h>\n#include <ESP8266HTTPClient.h>\nlong duration, distance;\n\nchar* ssid = \"****\";\nchar* password = \"******\";\nvoid connectWiFi() {\n  WiFi.begin(ssid, password);\n  while (WiFi.status() != WL_CONNECTED) {\n    WiFi.begin(ssid, password);\n    delay(5000);\n  }\n}\n\nvoid setup() {\n    Serial.begin(115200);\n    WiFi.mode(WIFI_STA);\n    connectWiFi();\n    WiFiClientSecure client;\n    distance = 4;\n    String data = \" {\\\"value\\\":\"+String(distance)+\"}\";\n    if(client.connect(\"********.com\",443)){\n    client.println(\"POST /****/**** HTTP/1.1\");\n    client.println(\"Host:********.com\");\n    client.println(\"Content-Type:application/json\");\n    client.println(\"x-apikey:******************\");\n    client.print(\"Content-Length: \");\n    client.println(data.length());\n    client.println(\"\");\n    client.println(data);\n    }\n    delay(1000);\n    ESP.deepSleep(600000000);\n}\n\nvoid loop() {\n\n}\n
\n", "Title": "NodeMCU connected with batteries stopped working after 2 hours", "Tags": "|esp8266|power-consumption|", "Answer": "

The LD33CV's quiescent current is 5 mA. This is the problem: even in deep-sleep mode, the voltage regulator itself continuously draws that current, which dwarfs the ESP8266's own deep-sleep consumption.

\n" }, { "Id": "3222", "CreationDate": "2018-07-25T13:44:15.907", "Body": "

Are there any limitations (mainly on the number of messages) when using a free MQTT server (such as iot.eclipse.org) that would cause messages not to be published from time to time?

\n\n

I have 2-3 devices connected at home and, for testing purposes, publish about 10-20 messages a day.

\n", "Title": "Free MQTT brokers", "Tags": "|mqtt|", "Answer": "

The latest list of public and free MQTT brokers is located here. Some of them provide service just for test purposes, as hardillb mentioned; some even provide an SLA for the free service. Please read each broker's information page carefully for the terms of use, SLA and limitations.

\n" }, { "Id": "3228", "CreationDate": "2018-07-28T04:09:28.823", "Body": "

Has anyone reprogrammed a commercially available smart plug, such as the Etekcity Voltson? Are there plugs available with open source Wi-Fi chips in them that perform better than an ESP8266? I\u2019m looking for recommendations on a reprogrammable plug. I want the comms to stay local in my house. I can program it, I just need to find good \u201copen\u201d hardware.

\n\n

I found this Etekcity teardown, ESWO1-USA Etekcity Voltson Smart Wifi Outlet Teardown Internal Photos Etekcity Corporation:

\n\n\n\n

If all the circuits are available, it would be plausible to reprogram or replace the ESP8266EX inside it.

\n\n

This guy does a hack job separating the case, but if he had used plastic tools, it looks like there are clips to snap it back together: YouTube - Etekcity Wifi Smart Plug Teardown.

\n\n

Until I hear something better is available for my purpose, I will give this a shot.

\n", "Title": "Commercial Arduino Reprogrammable Smart Plug", "Tags": "|wifi|arduino|smart-plugs|", "Answer": "

The Sonoff line is very DIY-friendly, breaking out the ESP8266 headers you need to program it. Anything Wi-Fi under $10 USD/ea commercially will probably be using an ESP8266. They probably won't get much cheaper than that; there's just no margin down there. In the DIY market, there's nothing else Wi-Fi-capable and programmable under $5/ea that is commonly available. I've heard of a few contenders, an \"ESP killer\", but nothing has taken off yet.

\n\n

In terms of performance, again, there is not a lot of competition. The ESP32 is a lot faster and, more importantly for IoT, is dual core. This allows you to notify a slow server of a button press without missing a 2nd button press while waiting; if done right, the 2nd press notification will be sent out as soon as the first is done. Speed-wise, the ESP32 compares favorably to boards 2-3 times the cost (under $10 USD/ea), and has an alphabet soup of hardware interface support. If you need more than that, you should look into Pi-shaped SBCs instead of MCUs.

\n\n

I've seen NodeMCUs connect over a kilometer away, albeit barely. The router position makes a big difference, and I don't think more expensive MCUs will offer any better range; you would be better off getting an ESP8266 with an antenna jack and a large external antenna. You should have no issue with range using one of those. The lag on the ESP's HTTP interface is incredibly low; I've turned around packets in 8 ms. I get over \"five 9s\" of uptime long-term, based on my server logs. The Wi-Fi send bandwidth is about 20 KB/s, which is plenty for most automation tasks, and the receive is twice as fast. You can get \"Wemos pro\" boards with 16 MB of flash if you need it, but with Wi-Fi built in, and server storage so cheap, you shouldn't need a whole lot on-board.

\n\n

The main drawback of ESP8266s is the TLS support, as you mentioned. It's slow, and the environment is not really designed in a way that it can be made much better. The ESP32 is much better about this, and will improve further once the hardware crypto accelerations are all wired up in the SDK. At any rate, you said you want to keep it local anyway. It's hard to use HTTPS locally, since you don't have a domain and an easy-to-use cert on localhost. Also consider that your Wi-Fi is already encrypted; the security depends on your password's complexity. If someone is on your LAN, you likely have bigger issues than them surreptitiously turning off your lights.

\n\n

In short, at the price points we're talking about here, the ESP8266 is (still) as good as it gets, and it's a whole lot better than it was a couple of years ago. I was secretly hoping a lot of people would jump in and recommend something I didn't know about, and maybe down the road we can update this, but if you can't be with the one you love, learn to love the one you're with; it's good enough for the rest of us. For now.

\n" }, { "Id": "3235", "CreationDate": "2018-08-01T13:19:41.563", "Body": "

If the Wi-Fi that my Nest camera is connected to goes down, can it reconnect to an alternative Wi-Fi network? In other words, if \"thieves\" cut the broadband cable, but I still have a 4G router as a backup, would it work?

\n", "Title": "Nest cam fallback wifi", "Tags": "|wifi|digital-cameras|surveillance-cameras|nest-cam|", "Answer": "

A better approach is to get a broadband router that will fall back to using a 3G/4G USB stick if the broadband line goes down.

\n\n

This means your devices (such as your Nest cam) don't need to know about two different networks; the router handles all of that for every device.

\n\n

There are plenty of these on the market, as it's a standard fallback for small businesses that don't want to install a backup second line.

\n" }, { "Id": "3236", "CreationDate": "2018-08-01T14:44:58.433", "Body": "

I have some Philips Hue bulbs and a Bridge set up in my house.

\n\n

When I am at home the Philips Hue App can connect to and control everything.

\n\n

However, when I am away from home (or simulate such by turning off WiFi on my phone) the App is unable to connect.

\n\n

The Out of home control section in Settings shows Not logged in

\n\n

I tap it, then the Log in button and a browser window opens connecting to api.meethue.com then redirects to account.meethue.com

\n\n

I sign in to the browser page and am asked to grant permission to the App and I click yes

\n\n

The Out of home control section then says Logging in... in orange and a red notification appears saying Unable to connect

\n\n

Eventually, another alert pops up saying Can't log in to My Hue because the Hue Bridge is offline. Make sure the Hue Bridge is connected to the internet.

\n\n

Then the Out of home control section reverts to Not logged in

\n\n

The Bridge is definitely connected to the internet - all 3 of the blue LEDs are lit.

\n\n

What am I doing wrong?

\n\n

EDIT: I have the same problem on my iPad and on the Hue Labs website

\n", "Title": "Philips Hue Out of home control won't log in", "Tags": "|philips-hue|", "Answer": "

It was a port issue on my network.

\n\n

I found a Reddit post which mentioned UPnP and having Port 80 available on the router.

\n\n

I have a Windows Home Server running on my network which opens up Port 80 via UPnP for some of its services.

\n\n

I disabled those services but still couldn't connect via the App or Labs

\n\n

I restarted the Bridge and can now connect.

\n\n

So, the answer is - make sure the Bridge can get port 80 on your router.

\n\n

EDIT: I have since re-enabled the Windows Home Server services and both are now playing nicely together!

\n" }, { "Id": "3241", "CreationDate": "2018-08-01T18:49:08.810", "Body": "

I am looking to build a smart device with no external controls (no buttons, display, etc.) and am trying to figure out how to most easily allow a user to supply Wifi configuration to it.

\n\n

A specific device that comes to mind is the Chromecast; to configure this device, a user simply goes to a URL (something like chromecast.com/setup) and is able to configure the device from there. How is something like that implemented on a technical level? The site must redirect to a local IP address or something, but that wouldn't make sense because there is no guarantee that the device has any specific IP address.

\n\n

In short, I am looking for a description of how such a feature on something like the Chromecast is implemented.

\n", "Title": "How to configure Wifi on a headless smart device?", "Tags": "|security|wifi|", "Answer": "

It turns out that the Chromecast uses a fairly common method of getting Wi-Fi details\u2014it acts as a Wi-Fi access point when turned on. The apps wrap the setup process up neatly, but essentially the process is this:

\n\n1. When first powered on (or after a reset), the Chromecast broadcasts its own Wi-Fi access point.\n2. The app on your phone/computer connects to that access point and sends it the name and password of your home Wi-Fi network.\n3. The Chromecast then drops its access point and joins your home network using those credentials.\n\n

This isn't an unusual method (and similar products might just have you directly connect to the network from your phone/computer, loading up a landing page). All that's different in this case is that the app handles the steps for you.

\n" }, { "Id": "3246", "CreationDate": "2018-08-02T15:58:45.723", "Body": "

I am trying to implement a control software (client) that communicates with a number of distributed devices (servers) over Modbus. I have been reading device documents to find out where the data I want to collect is stored in registers, but I never felt like I understood the location decisions that were being made to create the Modbus servers.

\n\n

What kinds of data are typically stored in each of the addressing zones? What is a coil?

\n\n

Taken from http://www.simplymodbus.ca/FAQ.htm: [table of the Modbus addressing zones]

\n", "Title": "What are the differences between Modbus addressing zones?", "Tags": "|protocols|", "Answer": "

While Bence is correct, you have also asked

\n\n
\n

I have been reading device documents to find out where the data I want\n to collect is stored in registers, but I never felt like I understood\n the location decisions that were being made to create the Modbus\n servers

\n
\n\n

The only way to find where the data is located in the registers is through the documentation provided with the device.

\n\n

Now, talking about the decision of where to place which data, consider the following example.

\n\n

You have a temperature controller which provides you with the following data,\n1. On/Off status\n2. Temp Set Point\n3. Actual Temp (typically called process value)

\n\n

Then,

\n\n

On/Off Status:\nIf you want to allow an external device or software to turn the device on/off, you would place it in a coil. If you just want to inform users about the on/off status, you would place it in a contact (discrete input), making it read-only and thus restricting external control.

\n\n

Set Point:\nTypically you would want an operator to change the set point, and this value needs more than a single bit, so it would be placed in a holding register, which you can both read and write.

\n\n

Actual Temp:\nThis is always read-only, as you are reporting the actual sensed temperature, so it should be placed in an input register.

\n\n
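The example above maps directly onto Modbus's four data zones; as a quick reference, here is a sketch of the zones and the standard function codes used to access them:

```python
# The four Modbus addressing zones and the standard function codes (FC)
# used to access them, per the Modbus application protocol.
ZONES = {
    'coil':             {'bits': 1,  'writable': True,  'read_fc': 1, 'write_single_fc': 5},
    'discrete input':   {'bits': 1,  'writable': False, 'read_fc': 2},
    'holding register': {'bits': 16, 'writable': True,  'read_fc': 3, 'write_single_fc': 6},
    'input register':   {'bits': 16, 'writable': False, 'read_fc': 4},
}

# The temperature controller example above, placed into those zones:
TEMP_CONTROLLER_MAP = {
    'on_off':        'coil',              # external on/off control allowed
    'set_point':     'holding register',  # 16-bit value, read/write
    'process_value': 'input register',    # 16-bit value, read-only
}
```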

If you want to integrate Modbus communication into your software, try NModbus, which is a free Modbus library and has worked great for me.

\n\n

https://github.com/NModbus/NModbus

\n\n

Hope that helps.

\n" }, { "Id": "3262", "CreationDate": "2018-08-06T14:54:22.950", "Body": "

I'm really a newbie in cryptography. I want to do MQTT payload encryption with AES, and I've done it with the PyCrypto library, but I'm still wondering how I can encrypt the AES key before sending it to the subscriber. So I chose a Python library called PyNaCl to do that with ECC (Curve25519), but I have no idea how to exchange the public key between the publisher and subscriber. Do you have any idea how to do that?

\n", "Title": "mqtt publish/subscribe key exchange?", "Tags": "|mqtt|cryptography|", "Answer": "

Public keys by definition are public (no need to keep them secret).

\n\n

So there is no reason it can't be made available for download via HTTP or, at a push, published as a retained message on a known MQTT topic.
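As a sketch of the HTTP option, the publisher could expose its public key on a tiny stdlib server; the /pubkey path and the key bytes below are placeholders, not a real key or an established convention:

```python
# Minimal sketch: serve the publisher's public key over plain HTTP so any
# subscriber can fetch it. The /pubkey path and key bytes are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

PUBLIC_KEY = b'-----BEGIN PUBLIC KEY-----\n...placeholder...\n-----END PUBLIC KEY-----\n'

class KeyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/pubkey':
            self.send_response(200)
            self.send_header('Content-Type', 'application/octet-stream')
            self.end_headers()
            self.wfile.write(PUBLIC_KEY)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To run: HTTPServer(('0.0.0.0', 8080), KeyHandler).serve_forever()
# Subscribers then fetch http://<publisher-host>:8080/pubkey once at startup.
```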

\n" }, { "Id": "3266", "CreationDate": "2018-08-07T06:22:59.513", "Body": "

It is said that there is no standard for IoT protocol communication. What does that mean?

\n", "Title": "What does it exactly mean to say the IoT Protocols are not standardized?", "Tags": "|standards|", "Answer": "

I think what you mean to say is that the in-field M2M communication protocols are not standardised: for example, some devices use Zigbee, some use Z-Wave, some use BLE, etc.

\n\n

So when you have multiple devices from multiple vendors on a premises, all implementing different protocols like the ones mentioned above, it becomes a problem to fetch the data from all of them: you need a middle layer/device which supports all these different protocols to fetch the data and push it forward. There is no single communication protocol used by all manufacturers. This is what people typically mean when they say there is no standardization of protocols for IoT.

\n\n

Protocols like MQTT and CoAP are typically used for the exchange of data between a field gateway and a remote server (like the Azure IoT Hub).

\n\n

Hope it made sense.

\n" }, { "Id": "3276", "CreationDate": "2018-08-12T04:20:28.047", "Body": "

I'm looking for the GPIOs I can use on the Sonoff Dual (after flashing my own software).\nOn the Sonoff Basic there are GPIOs to access the LED, the relay, and - most needed for me - GPIO14 for external input, as shown here.

\n\n

Since I need a 2-relay solution, I need 2 external inputs.\nI'd be happy to get some assistance in finding those \"free GPIOs\".

\n", "Title": "Sonoff Dual - Where to find GPIOs", "Tags": "|esp8266|gpio|", "Answer": "

You can find what you need at the bottom of the aforementioned link.

\n\n

Further to this, the schematic shows that, apart from the relay, button and LED GPIOs (which you probably should not play with, since they already have functions), the Rx and Tx pins (clearly labelled on your board) are also GPIO pins. You can solder another header on there. I don't know what software \"your own\" is, but firmware like Tasmota, for example, allows these pins to be reconfigured for whatever function you wish: they can take on the functions of GPIO4 and GPIO14 on other style boards, or even be used as I2C.

\n\n

You should take confidence from the fact that I have used these schematics to add 2.5mm jacks to Sonoff Basics. Be sure to check the version number on your board. There are instructions out there on how to modify a TH, but I think that's a really old board.

\n" }, { "Id": "3284", "CreationDate": "2018-08-15T08:38:57.537", "Body": "

I am an experienced developer, and I want to learn IoT beginning with a small project: automatically watering the plants in my house.

\n\n

Here are the requirements:

\n\n\n\n

Should I use an Arduino or a Raspberry Pi?

\n\n

What kind of architecture should I use?

\n\n

Or, basically, any idea of what I should read and where I should start?

\n", "Title": "First steps to learn IoT with watering plants project", "Tags": "|raspberry-pi|aws-iot|arduino|", "Answer": "

This is a very generic question; the answers mainly depend on your existing skills and on whether you desire to progress into developments that might be commercially relevant. Depending on how far ahead you want to plan your learning, you might want to start simple and upgrade the architecture/implementation as you go along.

\n\n

Platform

\n\n

An SBC (Pi or similar) is great if you want to focus on high-level software. This question addresses some of the reasons that working with an SBC and an MCU present different experiences. An SBC is rather power-hungry and might not come with built-in short-range connectivity.

\n\n

There are small MCU boards with either WiFi or BLE built in; these are much better suited for battery-powered operation. MCU boards can be coded in Python: the micro:bit has Bluetooth and supports MicroPython (but might not be well optimised for ultra-low power if you use it like this).

\n\n

If you care about making a secure platform (now or in the future) then you might also care about having secure on-chip memory, good entropy sources, etc.

\n\n

Architecture

\n\n

The 'many nodes, one hub/gateway' approach is good for battery powered devices. You can have an array of battery powered/short range devices (mesh or otherwise), communicating with a central SBC device. The SBC handles your WAN/cloud interface and also some stand-alone features if necessary.

\n\n

If you make each node a peer (with wifi/WAN access) then you only need to write one software stack, but it's more complex, and you end up being reliant on LAN for any communications - so power outage operation isn't possible.

\n\n

To clarify the types of devices:

\n\n

SBC: Single Board Computer; the Raspberry Pi is the most common. These can run Linux, and might also be a NAS, a WiFi router, a smart home device, or a mobile phone/tablet running a bit of software to handle the automation task.

\n\n

MCU: A much wider class of device, not necessarily significantly lower in processing capability, but more likely to run a real-time OS and to use event-driven programming. There are lots of small eval boards, WiFi or Bluetooth modules with some spare cycles, and dedicated small-form-factor boards like the Teensy series. An MCU might be better for interfacing to certain types of sensor, and will often have a wide range of interfaces available (even displays).

\n" }, { "Id": "3293", "CreationDate": "2018-08-16T14:39:50.247", "Body": "

I am wondering about best practices of topic naming and payload design of mqtt messages.

\n\n

Is it better to have multiple, longer topic names with small payloads, or a short topic name with a bigger payload?

\n\n

For example:

\n\n
plant1/machineA/sensorX/temperature/value 20\nplant1/machineA/sensorX/temperature/unit C\nplant1/machineA/sensorX/temperature/timestamp 2018-08-01T12:00:30.123Z\n
\n\n

vs.

\n\n
plant1/machineA/\n{\n  [\"sensorX\": {\n   \"value\": 20,\n   \"unit\": \"C\",\n   \"timestamp\": \"2018-08-01T12:00:30.123Z\"\n  }]\n}\n
\n\n

There are a lot more possibilities. But is there a general approach? As much as possible in topic name or in payload?

\n", "Title": "MQTT multiple topics vs. bigger payload", "Tags": "|mqtt|publish-subscriber|", "Answer": "

The decision should be made based on how you are using the topics. If you need the values together, post them into one topic; if you use them separately, put them into separate topics. Also, do not post each value on its own topic, as in your first sample. And do not create excess topic levels, like sensorX/temperature for temperature sensors; go from more general at the beginning to more specific at the end.\nSo I would recommend you post into:

\n\n
plant1/machineA/sensorX\n{\n   \"value\": 20,\n   \"unit\": \"C\",\n   \"timestamp\": \"2018-08-01T12:00:30.123Z\"\n}\n
\n\n

or if you have various types of sensors and want to select between them put temperature somewhere before sensor name:

\n\n
plant1/machineA/temperature/sensorX\n
\n\n

or also possible case is:

\n\n
temperature/plant1/machineA/sensorX\n
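Whichever topic layout you pick, building the single-payload variant is straightforward; a stdlib-only sketch (hand the result to whatever publish call your MQTT client provides):

```python
import json
from datetime import datetime, timezone

# Build the recommended single-message payload for plant1/machineA/sensorX.
topic = 'plant1/machineA/sensorX'
payload = json.dumps({
    'value': 20,
    'unit': 'C',
    'timestamp': datetime.now(timezone.utc).isoformat(timespec='milliseconds'),
})
# then e.g. client.publish(topic, payload) with your MQTT library of choice
```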
\n" }, { "Id": "3299", "CreationDate": "2018-08-18T20:39:40.383", "Body": "

If an IoT protocol has small bandwidth consumption, does that mean it is slower?

\n\n

I heard that MQTT is perfect for when we are restricted in bandwidth, but does that make it slower?

\n", "Title": "What is the difference between bandwidth consumption and speed of IoT protocols?", "Tags": "|mqtt|", "Answer": "
\n

I heard that MQTT is perfect for when we are restricted in bandwidth, but does that make it slower?

\n
\n\n

No - if anything, the opposite. The fewer bytes you have to send, the shorter the time for the total message to arrive at its destination for a fixed rate of transfer.

\n\n

MQTT is said to have a low overhead: it sends only a very small amount of extra data along with the actual content of the message you want to send. The header is incredibly small compared to something like HTTP, which has a very verbose text header including things like User-Agent and ETag, whereas the MQTT packet header just includes the topic, message size and some bit flags for QoS and retained state. Apart from the topic, all of this information is encoded in the smallest possible binary form.

\n\n
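To see how small that overhead is, here is a sketch that computes the on-wire size of a QoS 0 MQTT 3.1.1 PUBLISH packet from its framing rules (one fixed-header byte, a variable-length "remaining length" field, and a 2-byte topic-length prefix):

```python
# Size of an MQTT 3.1.1 PUBLISH packet at QoS 0:
# 1 fixed-header byte + 'remaining length' varint (7 payload bits per byte)
# + 2-byte topic length + topic + payload.
def publish_packet_size(topic, payload):
    remaining = 2 + len(topic.encode()) + len(payload)
    varint_bytes = 1
    while remaining >= 128 ** varint_bytes:
        varint_bytes += 1
    return 1 + varint_bytes + remaining

# A 2-byte reading on a short topic costs just 9 bytes on the wire,
# versus hundreds of bytes of headers for a typical HTTP request.
size = publish_packet_size('a/b', b'20')
```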

The original use case for MQTT was to send data back from an oil pipeline over a satellite network connection, where the price of sending each byte of information was very high.

\n" }, { "Id": "3309", "CreationDate": "2018-08-23T14:49:53.723", "Body": "

I've seen a lot of Smart Plugs for sale online like the ones below.

\n\n

The next logical step to me seems to be integrated plug sockets in the wall, as the smart plugs are quite bulky. But I can't for the life of me find any...

\n\n

Do they exist? If so, where? If not, why?

\n\n

\"Smart

\n", "Title": "Do integrated smart plug sockets exist?", "Tags": "|smart-home|smart-plugs|", "Answer": "

There is also a company called \"Allterco Robotics\". They introduced the \"Shelly\", a very small piece of ESP8266 hardware running open-source software.

\n\n

You can place one behind each of your existing sockets and control those sockets via WiFi afterwards.

\n\n

https://shelly.cloud/shelly1-open-source/

\n" }, { "Id": "3317", "CreationDate": "2018-08-25T11:30:47.153", "Body": "

We are building a solution as described below:

\n\n
    \n
  1. Arduino-based sensors read data and send it to a Raspberry Pi every second.
  2. \n
  3. The Raspberry Pi processes this data and then communicates with a backend system using an exposed web service every minute.
  4. \n
  5. The solution is in an industrial environment.
  6. \n
\n\n

My Questions:

\n\n
    \n
  1. Do I need middleware for this job?
  2. \n
  3. If the answer is yes, can I use a Raspberry for this or better use a server/VM for this purpose?
  4. \n
  5. Any middleware recommended?
  6. \n
\n", "Title": "IoT and middleware", "Tags": "|raspberry-pi|arduino|", "Answer": "

The concept of middleware is a little ambiguous. Do you mean a middleware between your backend and the Raspberry Pi(s)?

\n\n

A middleware is useful when you have different devices and protocols communicating with your backend. Its job is to handle communication in an easier way, without you having to worry about which device is using which protocol. However, it adds complexity to your system (communication error handling, deployment, scalability and so on).

\n\n

In your case, whether you have one Raspberry Pi or many of them communicating the same kind of data (since the Raspberry Pi is communicating the data to the backend, the Arduinos should not even be considered here), if you think your system will not change much, then no, I think you will not need it.

\n\n

If you think that in the future your system will have to interface with many protocols (HTTP, MQTT and others) and in different ways, then yes, a middleware might be needed.

\n" }, { "Id": "3325", "CreationDate": "2018-08-25T19:14:22.070", "Body": "

I have an Onion Omega 2 which I haven't used for a year or more and would like to use for an IoT project.

\n\n

I have attached it to the Expansion Dock, and connected it to my laptop with a USB cable.

\n\n

Four white LEDs flicker constantly, but I don't see the Omega on my laptop's list of WiFi devices. Nor can I discover it with Bonjour.

\n\n

Having looked through the troubleshooting docs, I decided to follow the instructions at

\n\n

https://docs.onion.io/omega2-docs/connecting-to-the-omega-terminal.html#connecting-to-ssh-windows

\n\n

However, when I try to use Putty to connect to omega-ABCD.local where ABCD are the last digits of the Omega's MAC address, using port 22, Putty says that it can't connect because the \"Host does not exist\".

\n\n

Any ideas how I can connect?

\n", "Title": "Can't connect to Onion Omega 2", "Tags": "|onion-omega2|", "Answer": "

With a malfunctioning system, you want to connect in a way that does not depend on networking. MT7688 systems including the Onion Omega have logic-level serial consoles.

\n\n

The \"Expansion Dock\" should already have a USB serial function on it, if not you'll need to get a 3.3v logic level USB serial converter.

\n\n

You'll have to check the Onion docs to determine which pins carry the serial interface and what the baud rate is; if I recall correctly, it is on a different port than is typical for MT7688s (though that's only relevant after looking up the physical module pinout) and may be 57600 baud versus the 115200 used by most other MT7688 systems. If using the USB serial on the dock, you should only need to worry about the baud rate.

\n\n

There is also a button-based mechanism for doing a factory reset, though that would not work in the case of corruption of the flash partitions that user operations do not normally touch, such as might result from an incomplete firmware update.

\n\n

Generally speaking you'd do better pursuing this on the Onion community site. While the company itself has a rather poor record (see the many issues documented on that site), there is a fair amount of community support there.

\n" }, { "Id": "3336", "CreationDate": "2018-08-28T20:17:17.337", "Body": "

Not sure this is the correct StackExchange site, but I have a whole-home audio system in my home, with 2 Sonos units connected to it as sources, and they work fine. Is there a way to use my iPhone as a remote source for a whole-home setup? It can't be Bluetooth, and it would have to be a solution where I can use my phone remotely (on the same network is fine), with whatever is playing on my phone being input to the preamp as a source.

\n\n

I am thinking maybe there is some device you can buy that will \"listen\" to your phone and send the signal into the preamp?

\n", "Title": "Using an iPhone as a source for a whole home audio system", "Tags": "|audio|", "Answer": "

Sonos supports AirPlay, so you should be able to just stream audio from the phone to that and have it feed into your existing system.

\n" }, { "Id": "3345", "CreationDate": "2018-08-31T11:33:13.637", "Body": "

This is my policy document:

\n\n
{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": \"iot:Connect\",\n      \"Resource\": [\n        \"arn:aws:iot:us-east-2:000000000000:client/sub\",\n        \"arn:aws:iot:us-east-2:000000000000:client/pub\"\n      ]\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": \"iot:Subscribe\",\n      \"Resource\": \"arn:aws:iot:us-east-2:000000000000:topicfilter/org/cid/+/data\"\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": \"iot:Publish\",\n      \"Resource\": \"arn:aws:iot:us-east-2:000000000000:topic/org/cid/sample/data\"\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": \"iot:Receive\",\n      \"Resource\": \"arn:aws:iot:us-east-2:000000000000:topic/org/cid/sample/data\"\n    }\n  ]\n}\n
\n\n

This is my publishing client:

\n\n
mosquitto_pub -h endpoint-ats.iot.us-east-2.amazonaws.com -p 8883 -i pub --cafile aws-iot-root-ca-1.pem --cert pub-certificate.pem.crt --key pub-private.pem.key -t /org/cid/sample/data -m 'Hello'\n
\n\n

And, this is my subscribing client:

\n\n
mosquitto_sub -h endpoint-ats.iot.us-east-2.amazonaws.com -p 8883 -i sub --cafile aws-iot-root-ca-1.pem --cert sub-certificate.pem.crt --key sub-private.pem.key -t /org/cid/+/data  -d\n
\n\n

The subscription never goes through; it keeps reconnecting.

\n\n
Client sub sending CONNECT\nClient sub received CONNACK\nClient sub sending SUBSCRIBE (Mid: 1, Topic: /org/cid/+/data, QoS: 0)\nClient sub sending CONNECT\n
\n\n

The certificate is attached to the policy correctly.

\n\n

Is there an option to define publish/subscribe settings per client identifier? What am I missing?

\n", "Title": "aws iot - mosquitto_sub does not subscribe", "Tags": "|mqtt|mosquitto|aws|", "Answer": "

Two things:

\n\n
    \n
  1. The + wildcard for subscriptions is NOT honored in the policy. From the documentation:
\n
\n\n
\n

The MQTT wildcard character '+' is not treated as a wildcard within a\n policy. Attempts to subscribe to topic filters that match the pattern\n foo/+/bar like foo/baz/bar or foo/goo/bar fails and causes the client\n to disconnect.

\n
\n\n
    \n
  2. The topic string shouldn't have the leading slash.
\n
\n\n

Therefore, I changed the policy to have the exact topic string and in my pub and sub clients removed the leading slash. It works now.

\n\n
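A minimal sketch of why the leading slash mattered (the ARN below reuses the question's placeholder account id): in MQTT, a leading slash creates an empty first topic level, so /org/... and org/... are different topics.

```python
def topic_from_arn(resource_arn: str) -> str:
    """Extract the topic part of an iot:Publish resource ARN
    (assumed format arn:aws:iot:region:account:topic/<topic>)."""
    return resource_arn.partition(":topic/")[2]

ARN = "arn:aws:iot:us-east-2:000000000000:topic/org/cid/sample/data"

# The slash-prefixed topic the clients used is a different MQTT topic
# (its first level is empty), so it never matches the policy resource:
print(topic_from_arn(ARN) == "org/cid/sample/data")   # True  -> allowed
print(topic_from_arn(ARN) == "/org/cid/sample/data")  # False -> denied
```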

:roll-eyes:

\n" }, { "Id": "3346", "CreationDate": "2018-08-31T12:55:16.153", "Body": "

I've bought a pair of 433MHz RF Tx Rx modules.

\n\n

I've been struggling to find any information on getting them working on Android Things (Raspberry Pi).

\n\n

My goal is to set up AT to detect the RF transmission upon a doorbell press and then notify my phone if I'm not in. I'm an Android dev so the software side doesn't faze me, but I'm new to IoT so any and all advice is more than welcome.

\n", "Title": "Can I use Android Things to detect a doorbell press via an RF Transmitter/Receiver pair?", "Tags": "|raspberry-pi|android-things|", "Answer": "

Given that Linux devices do process scheduling, an Android Things application is not going to be able to reliably sample the signal coming from a 433 MHz radio directly. It simply can't poll a pin 433 million times each second.

\n\n

But you can probably add some radio components between your receiver and your Android Things board, like an envelope detector, so that the signal is demodulated to an extent and you get a cleaner digital input.
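Once the receiver's output is a clean digital line, decoding becomes a matter of timing pulses rather than sampling the carrier. A minimal sketch (pulse durations and the threshold are assumed, in the style of PT2262-type doorbell remotes):

```python
# Hypothetical timings for a cheap 433 MHz OOK doorbell remote.
SHORT_US, LONG_US = 350, 1050  # assumed short/long high-pulse widths

def decode_pulses(durations_us):
    """Classify high-pulse durations into bits with a simple midpoint threshold."""
    threshold = (SHORT_US + LONG_US) / 2
    return [1 if d > threshold else 0 for d in durations_us]

print(decode_pulses([340, 1100, 360, 1020]))  # [0, 1, 0, 1]
```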

\n" }, { "Id": "3355", "CreationDate": "2018-09-04T14:06:52.833", "Body": "

I tried to install Home Assistant on a Raspberry Pi Zero W, following this blog post. \nI followed all the steps, including starting the daemon as needed. \nWiFi is connected, and the Pi is accessible via \u2018ssh\u2019.

\n\n

After 30 minutes, having verified using \u2018htop\u2019 that the update process had ended, I tried to connect from another PC - with no luck.

\n\n

Can someone tell if additional processes are needed in order to run it?

\n\n
\n

EDIT1:

\n
\n\n
pi@hassbian:/home/homeassistant $ sudo systemctl status home-assistant@homeassistant.service \n\u25cf home-assistant@homeassistant.service - Home Assistant for homeassistant\n   Loaded: loaded (/etc/systemd/system/home-assistant@homeassistant.service; enabled)\n   Active: failed (Result: exit-code) since Mon 2018-09-03 20:45:07 UTC; 17h ago\n  Process: 631 ExecStart=/srv/homeassistant/bin/hass (code=exited, status=203/EXEC)\n Main PID: 631 (code=exited, status=203/EXEC)\n\nSep 03 20:45:07 hassbian systemd[1]: Started Home Assistant for homeassistant.\nSep 03 20:45:07 hassbian systemd[1]: home-assistant@homeassistant.service: main process exited, code=exited, status=203/EXEC\nSep 03 20:45:07 hassbian systemd[1]: Unit home-assistant@homeassistant.service entered failed state.\npi@hassbian:/home/homeassistant $ \n
\n\n
\n

EDIT2:

\n
\n\n
pi@hassbian:/home/homeassistant $ sudo journalctl -u install_homeassistant.service\n-- Logs begin at Mon 2018-09-03 20:44:51 UTC, end at Tue 2018-09-04 17:14:54 UTC. --\n
\n\n
\n

EDIT 3: new install of version 1.4\n The first link referred to v1.3 (which, as noted in the post, suits the RPi Zero). After installing the up-to-date software (v1.4), the system is operating as needed.

\n
\n", "Title": "Home Assistant on Raspberry Pi Zero W", "Tags": "|smart-home|raspberry-pi|", "Answer": "

Installing the latest version (1.4) solved it.

\n" }, { "Id": "3377", "CreationDate": "2018-09-08T22:18:31.607", "Body": "

Assume we need to perform a type of digital signature, such as ECDSA, with the sensors used in sensor networks. Is there a secure, feasible approach to storing the keys in these sensors?

\n\n

For example, is it possible to integrate a secure element with these sensors to perform digital signatures securely? Or is there perhaps a better approach?

\n\n

Meanwhile, is there an energy problem with generating signatures, given the energy limitations of sensors?

\n\n

In general, is it sensible to generate signatures on sensors? (Assume the messages must be signed by the sensors for security reasons.)

\n", "Title": "Possibility of integrating secure elements with sensor networks?", "Tags": "|security|sensors|cryptography|", "Answer": "

Yes, any modern IoT endpoint software stack will support what you're looking for. I'm not sure that 'signed' is the most accurate way to describe this, but encrypted messaging is a basic requirement. The messaging encryption is likely to use TLS, but you need at least to ensure:

\n\n\n\n

Finally, if you need to ask these questions, you really need to find a supplier who you can trust rather than trying to roll your own. Unless the whole stack is reasonably secure, you're pretty much wasting your own time by adding a token partial solution.

\n" }, { "Id": "3388", "CreationDate": "2018-09-11T05:53:06.317", "Body": "

I want a MQTT subscription client to consume messages from AWS IoT broker. On AWS IoT broker, wild card topic subscription is not supported. In our use case, we know the upper bound of number of possible subscriptions. We also know, when a device is set-up to deliver publications. Finally, the processing for every subscription is exactly the same. I have a message flow that is well tested for processing the data published from one device.

\n\n
    \n
  1. Is there a programmatic way to deploy that flow as a new flow with only the MQTT Input node configured for a specific topic string?
\n
  2. Conversely, is there a way to \"un-deploy\" a particular copy of the flow for a particular topic string in cases where a device may be decommissioned?
\n
\n", "Title": "node-red - Replicating message flows per MQTT subscribing client", "Tags": "|mqtt|node-red|deployment|", "Answer": "

You need to look at the Node-RED Admin API. This will allow you to retrieve the current flow; you can then manipulate it and push an updated version back to the runtime.

\n\n

Flows are represented as JSON objects that have entries for each node on the canvas and the links between them.

\n\n
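As a sketch of that manipulation (node ids and the "-dev2" suffix are made up for illustration; real flow JSON from GET /flows carries more fields), cloning a tested flow per topic and dropping it again might look like:

```python
import copy

def clone_for_topic(template_nodes, new_topic, id_suffix):
    """Copy a tested flow, retargeting its 'mqtt in' node to a new topic."""
    clones = copy.deepcopy(template_nodes)
    for node in clones:
        node["id"] += id_suffix  # ids must stay unique in the runtime
        node["wires"] = [[w + id_suffix for w in port]
                         for port in node.get("wires", [])]
        if node.get("type") == "mqtt in":
            node["topic"] = new_topic
    return clones

def undeploy_topic(nodes, topic):
    """Drop the flow copy whose 'mqtt in' node uses `topic` (one wiring hop)."""
    doomed = {n["id"] for n in nodes
              if n.get("type") == "mqtt in" and n.get("topic") == topic}
    for n in nodes:
        if n["id"] in doomed:
            for port in n.get("wires", []):
                doomed |= set(port)
    return [n for n in nodes if n["id"] not in doomed]

template = [
    {"id": "in1", "type": "mqtt in", "topic": "org/dev1/data", "wires": [["fn1"]]},
    {"id": "fn1", "type": "function", "wires": [[]]},
]
clones = clone_for_topic(template, "org/dev2/data", "-dev2")
print(clones[0]["topic"])  # org/dev2/data
```

The cloned array would then be appended to the full flow list and pushed back with POST /flows.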

The best place to get help with the API will be on the Node-RED forum or Slack team. Both are linked to from the Node-RED homepage.

\n" }, { "Id": "3397", "CreationDate": "2018-09-13T05:39:39.547", "Body": "

How can I see the messages that are published to the Mosquitto broker which has been locally installed in Ubuntu?

\n\n

I have used Spring Integration and think it worked fine. I also tried commands in the Command Prompt after searching on the Internet to view the published data in broker.

\n", "Title": "View the messages sent to the local Mosquitto server", "Tags": "|mqtt|mosquitto|", "Answer": "

To publish a message you should try like this:

\n\n
mosquitto_pub -h localhost -t YourTopic -m \"Your message\"\n
\n\n

Now if you want to see that message you should try like this:

\n\n
mosquitto_sub -h localhost -t YourTopic\n
\n\n

Please note in some circumstances you cannot see a published message:

\n\n
    \n
  1. Your broker is not installed properly
\n
  2. Your broker is installed on one device, while your subscriber and publisher run on different devices. In that case you should make sure all the devices are connected to the same network and can reach each other without a firewall interfering (try pinging each device from the others).
\n
\n" }, { "Id": "3410", "CreationDate": "2018-09-14T13:05:44.017", "Body": "

I am designing and building my own house. I am very interested in automating several devices such as blinds and lights.

\n\n

In general, I am thinking of using Zigbee (Xiaomi and Aqara), RF and IR devices. The latter two will be controlled by an RF + IR controller.

\n\n

Each device by itself probably won't consume much. However, imagine all my switches are connected and my blinds are always waiting for an RF signal to activate them. All those devices then need to run on DC, which means each of them needs, at least, an AC/DC converter.

\n\n

How much could these devices consume? Is it relevant? Should I worry about it and try to reduce or optimize the number of connected devices?

\n", "Title": "How relevant is the energy consumption of IoT and RF home devices?", "Tags": "|smart-home|power-consumption|", "Answer": "

Yes, it is perfectly possible that a domestic 'smart' installation will struggle to break even in term of saved energy - even though that isn't quite the question you asked.

\n\n

The RF side is fairly efficient, depending on the protocol. A receiver can be fairly low power (finger in the air of sub 1 Watt), and even if the transmit side is 5 watts, the duty cycle is very low. The trick is that maybe you end up adding these to every bulb (which was 3-5 W initially), so your lighting has become ~20% less efficient. Actually, it's far worse because the lighting would only be on for a few hours, but the overhead is on 24/7. So you've potentially halved your efficiency.

\n\n
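To put rough numbers on that bulb example (all figures assumed for illustration only):

```python
STANDBY_W = 0.5   # assumed always-on receiver draw, per bulb
BULB_W = 5        # assumed LED bulb draw while lit
N_BULBS = 20
LIT_HOURS = 4     # hours lit per day

standby_kwh_day = STANDBY_W * N_BULBS * 24 / 1000       # overhead runs 24/7
lighting_kwh_day = BULB_W * N_BULBS * LIT_HOURS / 1000  # only while lit

print(standby_kwh_day, lighting_kwh_day)  # 0.24 vs 0.4 kWh/day
# the 24/7 overhead is already ~60% of the lighting energy itself
```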

Taking a step back, the resting load of your house is probably in the order of 100-200 W (assuming you're not wasteful, and not an aggressive optimiser). In that context, adding a handful of extra devices won't burn a lot of power (compared to taking a shower or an extra round of hot drinks each day).

\n\n

Where you can save is if you can make use of the smarts to improve your space heating or water heating (maybe also with follow-me lighting if you're otherwise missing this). Your big loads only need a small optimisation to gain back a worthwhile amount of energy - it takes some effort to determine that the house is empty and doesn't need heating, but you'll easily justify leaving the router on 24/7 to achieve this.

\n" }, { "Id": "3413", "CreationDate": "2018-09-15T09:39:20.863", "Body": "

I'm having trouble installing Android Things on an SD card. The card itself seems fine; it's recognised by my MacBook Pro. I'm using the same command I used for the last Pi I set up.

\n\n

Below is the command and the response I'm getting back.

\n\n

I'm pretty new to Android Things but this command worked like a charm last time and nothing has changed AFAIK so no idea where to go from here.

\n\n
Jamies-MBP:ic-self-help jampez77$ sudo ~/Downloads/android-things-setup-utility/android-things-setup-utility-macos\nPassword:\nsudo: /Users/jampez77/Downloads/android-things-setup-utility/android-things-setup-utility-macos: command not found\n
\n", "Title": "Command not found while trying to flash Android Things", "Tags": "|android-things|", "Answer": "

The error implies that the file you are trying to run (/Users/jampez77/Downloads/android-things-setup-utility/android-things-setup-utility-macos) is just not where you've said it is.

\n\n

Rather than trying to run it from a long path, which introduces more chance of a typo, try running it from its own directory, e.g.

\n\n
cd ~/Downloads/android-things-setup-utility/\nsudo ./android-things-setup-utility-macos\n
\n\n

Also check that the directory is still there and that the actual command has the execute bit set.

\n" }, { "Id": "3437", "CreationDate": "2018-09-23T06:20:39.307", "Body": "

My friend and I are at a university and we have been asked by the agriculture professor to create automated watering systems for plants. We want to have control over the grow lights, so are there any commercial products with an API through which I can turn them on/off, query power consumption, etc. via an HTTP request?

\n\n

We just need to control 300w-1200w equivalent LED grow lights.

\n", "Title": "Are there any smart electrical plugs with an open API?", "Tags": "|smart-plugs|rest-api|", "Answer": "

Generally speaking, in such a case one would simply use a relay. A relay is exactly a &quot;smart plug&quot; without the smart part: basically a simple device that will flip a switch based on a command. That command can simply be putting a certain voltage on its control pin (most likely 3.3 V or 5 V when it comes to IoT). The command can also come via I2C; for instance, with this relay you have to use I2C to tell the relay board which switch you want to flip.

\n

So one would basically connect a relay to a microcontroller unit (MCU) with the necessary connectivity to receive the commands and pass them to the relay.

\n

I assume that you are also going to use some sensors and that kind of thing.

\n

There are two approaches. Either you buy each individual &quot;smart&quot; device, i.e. sensors, relays and so on, each of which bundles an MCU with one feature (for instance a relay, a sensor or an electrovalve).

\n

Or, the approach most commonly used at least in R&D: drive down the cost by using one or several MCUs as needed (for instance the ESP32 family) as the base board for the sensors and relays, using them to read data from the sensors and to switch outputs such as relays or electrovalves.

\n

If for some reason you require more than one MCU, then you would add a &quot;gateway&quot; that acts as the brain of the operation. That gateway board would control your slaves; all your requests would go through it, and the slaves would answer through the gateway to get back to you.

\n

Usually this gateway board uses a slightly bigger and easier-to-use processor, for instance something able to run Linux such as an i.MX6, i.MX8 or Raspberry Pi, with the proper connectivity and servers one would need. You would choose this board based on your connectivity criteria (do you need Ethernet? Bluetooth? WiFi? RF? GSM? Sigfox?) and then on its capacity to run what you will need it to run, such as a REST API, an MQTT broker, a socket server and so on.

\n" }, { "Id": "3440", "CreationDate": "2018-09-23T15:44:32.490", "Body": "

I was wondering if there's a sensor, or some type of sensor, that would detect by itself that it's been moved. Say I put the sensor at one end of the table and move it to the other end: it would detect that it has been moved.

\n\n

I have seen some sensors that detect motion, like PIR, but I think that's not what I'm looking for. So if anyone has an idea of the name, I would be glad to check it out.

\n\n

Even something pretty basic is OK, e.g. with a receiver to detect the motion or whatever.

\n", "Title": "Some type of movement sensor", "Tags": "|sensors|", "Answer": "

It sounds like you are looking for something like an accelerometer, or the more complex version, an IMU (inertial measurement unit).

\n\n

These sensors measure acceleration, so they can detect when an object goes from being at rest (stationary) to moving. They normally measure on 3 axes, so they can be used to tell in which (relative) direction the object has moved.

\n\n

An IMU also includes a gyroscope, so it can tell if the object rotates as well as moves.
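For the table example, the check can be as simple as flagging any deviation from the 1 g of gravity an accelerometer reads at rest; a sketch with an assumed noise threshold:

```python
import math

THRESHOLD_G = 0.15  # assumed noise floor; tune per sensor

def moved(sample):
    """Flag movement when the acceleration magnitude (in g) deviates from 1 g."""
    x, y, z = sample
    magnitude = math.sqrt(x * x + y * y + z * z)
    return abs(magnitude - 1.0) > THRESHOLD_G

print(moved((0.0, 0.0, 1.02)))  # False: sitting still on the table
print(moved((0.3, 0.1, 1.25)))  # True: being picked up or slid
```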

\n" }, { "Id": "3459", "CreationDate": "2018-09-27T07:35:59.507", "Body": "

My current project, in which the controller (client) sends sensor data to the server and receives feedback from the server with some additional data, uses the MQTT protocol for communication. It has 2 separate topics for client and server.

\n\n

For Example:

\n\n

Topic1 - Client(SUBSCRIBES), Server(PUBLISHES)
\nTopic2 - Client(PUBLISHES), Server(SUBSCRIBES)

\n\n

But suppose this project is a use case of a larger application; let's say some 5000 devices need to be installed somewhere.

\n\n

So, will 5000 different topics need to be created for both client and server? Or can it be done with fewer topics, and how?

\n", "Title": "Is it necessary to create a x number of MQTT topics for x number of devices?", "Tags": "|mqtt|", "Answer": "

From client to server you may pack a client-id into the payload; e.g. if it is JSON, one of the keys can hold the client-id value.

\n\n

The response from server to client should contain the client-id, so that the broker does not broadcast the message but sends it directly to the one connected client.

\n\n

At the same time you may subscribe your server to something like \"requests/+\", and each client will publish to \"requests/{client-id-1}\", \"requests/{client-id-2}\"; the server will receive both with just one subscription.
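A tiny illustration of why one subscription suffices (the matcher below is written just for this example; a real broker implements the matching for you):

```python
def matches(topic_filter, topic):
    """Minimal MQTT topic-filter matcher supporting '+' and '#'
    (ignores '$'-prefixed topics for brevity)."""
    f, t = topic_filter.split("/"), topic.split("/")
    for i, part in enumerate(f):
        if part == "#":
            return True
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(f) == len(t)

print(matches("requests/+", "requests/client-42"))        # True
print(matches("requests/+", "requests/client-42/extra"))  # False: '+' is one level
```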

\n" }, { "Id": "3460", "CreationDate": "2018-09-27T09:27:34.013", "Body": "

I have a 30W TR\u00c5DFRI driver for the kitchen top lights, and I want to control them with a TR\u00c5DFRI dimmer. That works like a charm.

\n\n

What doesn't work, is to add the TR\u00c5DFRI driver to the TR\u00c5DFRI app so I could use my smartphone to control the kitchen top lights through the driver.

\n\n

Pairing the dimmer to the TR\u00c5DFRI app works, but not the driver. I even tried resetting the dimmer and the driver, but to no avail.

\n\n

Any recommendations to make the TR\u00c5DFRI driver visible in the TR\u00c5DFRI app (and Apple Home)?

\n", "Title": "Can't connect IKEA TR\u00c5DFRI driver to TR\u00c5DFRI app", "Tags": "|apple-homekit|ikea-tradfri|", "Answer": "

You can use the dimmer to add a driver to the app.

\n\n

Follow the steps in the app, with the dimmer already added before doing this:\nTake the dimmer over to the driver and hold down the pairing button on the dimmer for about 10 seconds, right next to the driver (the driver must be powered on). The red LED will light up; do not let go of the pairing button until the red LED turns off. The app should show a pop-up saying you have added a new device. If you have a lighting product connected to the driver during the setup, it will flicker or pulse on/off as confirmation that the driver has been registered.

\n" }, { "Id": "3465", "CreationDate": "2018-09-27T18:56:48.537", "Body": "

We are thinking of using MQTT and I was wondering if there is a standard for topic dictionaries for sensors/devices - kind of like a MIB file for SNMP?

\n\n

Are these topic dictionaries published to a central repository?

\n", "Title": "Are there standard (topic) dictionaries for MQTT-capable devices/sensors?", "Tags": "|mqtt|", "Answer": "

Agree with hardillb's answer. There is no central repository. To add:

\n\n

MQTT is just the transport, on top of which you can layer any other protocol. This space is very immature; we only know of a couple of somewhat-standard protocols:

\n\n
    \n
  1. Sparkplug https://s3.amazonaws.com/cirrus-link-com/Sparkplug+Topic+Namespace+and+State+ManagementV2.1+Apendix++Payload+B+format.pdf\nbeing de-facto standardized by Eclipse https://projects.eclipse.org/proposals/eclipse-tahu

\n
  2. LWM2M-MQTT https://wiki.eclipse.org/images/e/e1/LWM2M_MQTT_EclipseIoTDaysGrenoble.pdf

\n
  3. Each of the cloud IoT platforms (AWS, Azure, etc.) has its own topic namespace and protocol.

\n
  4. Many ad-hoc implementations. Just subscribe to # on any of the public MQTT brokers (iot.eclipse.org, broker.hivemq.com, etc.).

\n
\n" }, { "Id": "3471", "CreationDate": "2018-09-28T12:46:12.023", "Body": "

As I understand more I WILL edit this question. For now, I am guessing at what I need. To make it easier for people to help, I'll tell you the overall purpose:

\n

I have programmed an ESP8266 to advertise it is the TV and that it can turn the TV on / off. The ESP8266 actually transmits the absolute on / off codes to the TV using IR signals. I believe I have added a second "advertisement" for yet another on / off feature to the same ESP8266 device.

\n

However, what I really want to add is a "relative volume" device. I believe I need to do this by using XML. That is, I believe I need to modify the XML transmitted to Alexa to not only advertise the on / off device but to also advertise a relative volume device.

\n

Where can I find examples where a relative volume device is advertised to Alexa?

\n

To clarify my objective, let me add an example:

\n

If I say

\n
\n

"Alexa, turn on the TV"

\n
\n

the TV will turn on. But, if I say

\n
\n

"Alexa, turn up the volume on the TV"

\n
\n

Alexa will respond

\n
\n

"TV does not support that"

\n
\n

I started by using the code here in this github.com project and added additional code to handle transmitting the IR signals to the TV. This project appears to transmit this XML in response to Alexa asking what the ESP8266 is capable of doing:

\n
HTTP.on("/eventservice.xml", HTTP_GET, [](){\n    Serial.println(" ########## Responding to eventservice.xml ... ########\\n");\n      \n    String eventservice_xml = "<scpd xmlns=\\"urn:Belkin:service-1-0\\">"\n        "<actionList>"\n          "<action>"\n            "<name>SetBinaryState</name>"\n            "<argumentList>"\n              "<argument>"\n                "<retval/>"\n                "<name>BinaryState</name>"\n                "<relatedStateVariable>BinaryState</relatedStateVariable>"\n                "<direction>in</direction>"\n                "</argument>"\n            "</argumentList>"\n          "</action>"\n          "<action>"\n            "<name>GetBinaryState</name>"\n            "<argumentList>"\n              "<argument>"\n                "<retval/>"\n                "<name>BinaryState</name>"\n                "<relatedStateVariable>BinaryState</relatedStateVariable>"\n                "<direction>out</direction>"\n                "</argument>"\n            "</argumentList>"\n          "</action>"\n      "</actionList>"\n        "<serviceStateTable>"\n          "<stateVariable sendEvents=\\"yes\\">"\n            "<name>BinaryState</name>"\n            "<dataType>Boolean</dataType>"\n            "<defaultValue>0</defaultValue>"\n           "</stateVariable>"\n           "<stateVariable sendEvents=\\"yes\\">"\n              "<name>level</name>"\n              "<dataType>string</dataType>"\n              "<defaultValue>0</defaultValue>"\n           "</stateVariable>"\n        "</serviceStateTable>"\n        "</scpd>\\r\\n"\n        "\\r\\n";\n            \n    HTTP.send(200, "text/plain", eventservice_xml.c_str());\n});\n
\n

I assume, in order to support (offer up to Alexa) relative volume control, all that needs to be done is add a description of the volume control feature to the above XML. However, I have not been able to find out how to do that.

\n", "Title": "How does an ESP8266 \"advertise\" it can handle Alexa \"relative volume\" commands", "Tags": "|alexa|esp8266|", "Answer": "

This question generated considerable interest (7 as of this writing). So I am posting a followup answer with a local only solution.

\n\n

I have accepted @hardillb answer as I have yet to find a method allowing Alexa to control relative Volume using a local only device.

\n\n

However, there is a way to control relative TV sound levels using a local-only device. By using a device name like \"TV sound\" and phrases like \"Alexa, turn up the TV sound\", Alexa can be coaxed into thinking it is turning the brightness of a device called \"TV sound\" up and down. In accepting this approach we are forced to use Alexa's absolute brightness levels while trying to control the TV's relative sound level. The first thing we notice is that we can only turn the TV sound down a few times before we exhaust Alexa's brightness range (Alexa jumps about 25% for each dimming command). But we can also tell Alexa the brightness our device is set to at the end of each command. If we tell Alexa the brightness is always 50%, then Alexa will always respond with more than 50% when we say \"turn up TV sound\" and less than 50% when we say \"turn down TV sound\".

\n" }, { "Id": "3474", "CreationDate": "2018-09-28T23:09:06.100", "Body": "

I\u2019m planning to develop a handheld device that can communicate with PCs over a WiFi network, Bluetooth and USB. I have looked at the forums about the RPi, and they say it does not support USB communication with a PC since both are masters. So my question is: what are some boards with specifications equivalent to the RPi 3+ or Banana Pi M2/M3 that allow USB/Bluetooth/WiFi communication with a PC?

\n\n

Edited:\nLimit suggestions to devices that are capable of running Linux/Ubuntu/Raspbian.

\n", "Title": "SBC similar to Banana/Raspberry Pi with USB comms", "Tags": "|raspberry-pi|linux|usb|banana-pi|", "Answer": "

In order to use USB gadget mode, you either need two USB controllers native on the device (so one can act as master, and one as slave), or all of the other peripherals (Bluetooth, Wifi, Ethernet, Keyboard, etc) will need to be connected by serial, SPI or something similar.

\n\n

Since the 'normal' use case of a single-board Linux computer will be to support some extra peripherals over USB, a two-USB-controller design is probably about as common as the very cut-down approach of the Pi Zero. This port may be described as 'USB On-The-Go', and you need to check that it will operate at the same time as the other peripherals that you need.

\n\n

You will probably find a wider range of microcontroller parts which support this peripheral combination, but to use these you would need lower-level software (including interfacing with the USB stack at some level - maybe a serial over USB).

\n" }, { "Id": "3479", "CreationDate": "2018-09-29T18:52:41.137", "Body": "

I am configuring a sonar distance sensor to send data to a cloud. \nBut the problem for me now is: I do not know when to send the data to the cloud.

\n\n

As far as I know, I could send the data continuously, every second. But in my case the sensor does not always \"sense\" a signal, so sending data that way seems wasteful of network bandwidth.

\n\n

I want to ask: normally, with this kind of sensor, how do people configure when the data is sent to the cloud?

\n", "Title": "How rapidly should the data from IoT devices be sent to the cloud?", "Tags": "|sensors|data-transfer|cloud-computing|", "Answer": "

Sending packets at a high rate 'feels' wrong, and does increase the chances of seeing some odd effects when the network latency spikes occasionally. In this instance, the fact that you're in a development cycle means that any period much above 10-30 sec will start to become painful to debug or adjust.

\n\n

Batching up the readings is one option, or you can decimate (sample rate reduce) your readings before uploading them (depending on the application). You can also apply a rate-adaptive approach if there are some events that you want to be able to post-process more precisely.

\n\n
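The batching and decimation ideas are one-liners in practice; a sketch (batch size and decimation factor assumed):

```python
def batch(readings, size):
    """Group readings so each upload carries `size` samples instead of one."""
    return [readings[i:i + size] for i in range(0, len(readings), size)]

def decimate(readings, factor):
    """Keep every `factor`-th reading before uploading."""
    return readings[::factor]

samples = list(range(12))
print(batch(samples, 4))    # 3 uploads instead of 12
print(decimate(samples, 3)) # [0, 3, 6, 9]
```

(Real decimation would usually low-pass filter first; slicing alone can alias a fast-changing signal.)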

At the highest level of 'not much change', you still probably want to see that the endpoint is alive, so even if you only used daily averaged/processed data, making an upload every hour might make some sense. If you want to measure occupancy, maybe anything over a 10% change would justify an instant update.

\n\n

If you do anything other than a simple periodic update, consider what the worst case fault condition might look like. Even though a broken sensor won't burn through your monthly broadband allowance, it could cause some effects on your LAN which are inconvenient. It's always useful to think about the extra steps which you would take if this project made it to volume deployment, or if it becomes an exploit target.

\n" }, { "Id": "3487", "CreationDate": "2018-10-01T11:08:20.460", "Body": "

I am working on an IoT project in which I need to send alerts to users based on rules the users have already defined, e.g. if the temperature value matches a certain condition then send alerts to the users; there are multiple such conditions.
\nI managed to send the alerts when a condition matches by using the following steps:

\n\n
    \n
  1. Store each device's threshold values and conditions in MySQL.

\n
  2. When device data arrives at the server, I check the current value against each condition's threshold and send the alert if it matches.

\n
  3. Since multiple conditions are associated with each device, every one of them has to be checked.

\n
\n\n

Is there any technology that I can use for this in my project?

\n", "Title": "Handle real time rule based events generated by IoT devices", "Tags": "|monitoring|", "Answer": "

You describe Node-RED, a free input-processing-output app well-suited to IoT. It lets you drag and drop many forks and conditionals into your info flow. It supports MQTT, sockets, and HTTP out of the box. If you need more power, you can write complex JS functions with a central state to supplement the GUI-based tools.
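Whatever tool ends up running them, the rule check itself stays simple; a sketch of the threshold evaluation described in the question (the field names and rule format are assumed):

```python
import operator

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def triggered_alerts(reading, rules):
    """Return every rule whose condition matches the incoming reading."""
    return [r for r in rules if OPS[r["op"]](reading.get(r["field"], 0), r["value"])]

rules = [
    {"field": "temperature", "op": ">", "value": 30, "alert": "too hot"},
    {"field": "humidity", "op": "<", "value": 20, "alert": "too dry"},
]
hits = triggered_alerts({"temperature": 35, "humidity": 40}, rules)
print([r["alert"] for r in hits])  # ['too hot']
```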

\n" }, { "Id": "3508", "CreationDate": "2018-10-07T18:39:28.797", "Body": "

I have 3 TR\u00c5DFRI 30W drivers in my kitchen, and succeeded in pairing all of them to the same remote. However, if I pair a lamp to a 2nd remote, it stops responding to the first one. I was hoping to use 2 remotes, to get 2-way control.

\n\n

From the documentation, one remote can control 10 lamp drivers, but it isn't clear to me if only one remote can drive a lamp at once. If it's possible, is there a reliable way to set it up?

\n\n

I don't have a gateway. If I have a gateway, can I use IFTTT to cross connect two driver/remote pairs or subsets to keep the drivers in sync in the way that I could do with SonOff?

\n", "Title": "TR\u00c5DFRI lights and multiple swiches", "Tags": "|ikea-tradfri|", "Answer": "

Really simple: hold down the pairing buttons on both remotes at the exact same time, right next to each bulb you want to pair, for 10 seconds. Boom!

\n" }, { "Id": "3518", "CreationDate": "2018-10-09T14:42:46.970", "Body": "

We have methods to run MATLAB code on the ESP8266 microcontroller. We can manipulate Arduino pins using Simulink in MATLAB.

\n\n

Can we do the same with an ESP32 microcontroller, since its code can also be executed from the Arduino IDE?

\n\n

On the Arduino, we are only able to execute either Simulink-generated code or its own code at a time. Will I have the same problem here with the NodeMCU ESP32-WROOM-32D?

\n\n

Datasheet for the above mentioned microcontroller

\n", "Title": "Can I use MatLab code in esp32?", "Tags": "|communication|arduino|esp32|", "Answer": "

Waijung 2 for ESP32 is what you need exactly.

\n

Waijung 2 for ESP32 is an Embedded Coder Target specifically for ESP32 microcontroller family.

\n

Not only can it generate C code from your MATLAB and Simulink blocks, it also supports advanced features such as WiFi External Mode simulation, allowing you to tune parameters and monitor signals from connected ESP32 hardware in real time, and much more. You can learn and take full benefit of Model-Based Design using affordable, popular, and powerful hardware.

\n" }, { "Id": "3520", "CreationDate": "2018-10-10T09:39:01.777", "Body": "

This question is intentionally rather open ended, and potentially opinion based, but it is intended to act as a catch-all for the questions on how to select a device for a sensor/endpoint. Any question which intends to be more specific would need to start with assumptions about all of these points.

\n\n

Question: In addition to the points below, how would someone go about selecting a good device for the sensor/endpoint part of an IoT system?

\n\n

There are already good questions 1, 2, 3 on how to select specific devices for a well-defined application, and questions that address some of the points below in detail.

\n\n

There are a number of clear factors which will help to determine which devices are suitable for a particular application. In the end, there are likely to be many good choices, and no obvious 'best'.

\n\n\n", "Title": "How to decide how to select an endpoint device", "Tags": "|sensors|microcontrollers|system-architecture|", "Answer": "

Let me answer this in a slightly frivolous way, better answers welcome.

\n\n

After considering all the above, choose:

\n\n\n\n

Come next year, you might make a different choice for the same problem.

\n" }, { "Id": "3536", "CreationDate": "2018-10-17T06:56:21.000", "Body": "

My question might be a little strange but I can't find any answer to it.\nI'm designing a simple IoT system that has some devices as clients and a server that controls these clients: reading sensors, sending commands, etc. On the communication side, I can use any internet-based protocol like HTTP, UDP, TCP, etc.

\n\n

On the other side, the devices use a 2G cellular network to connect, which has low bandwidth. Is there any standard message structure between client and server?

\n\n

For example, If I want to set an led on a device I can send led=1 or I can use a JSON-based structure like {led: 1}. But I have a very low bandwidth and I want to use a simple structure that uses compact size. Is there any standard at all?

\n\n

A device might have up to 10 sensors and 10 outputs and I want to get values as fast as possible.

\n\n

I know I can compress my messages but I need a robust and compact message structure.

\n", "Title": "Standard message structure for communication in HTTP", "Tags": "|communication|", "Answer": "

Firstly, if you use HTTP then it is very likely the HTTP headers will dwarf any message you are actually sending. A low overhead protocol like MQTT may be better suited if one of your key aims is to reduce bandwidth usage.

\n\n

As for the formatting of the data, it comes down to the type of values being sent and whether you need the data to be human readable at all times.

\n\n

If you want really tight packing then something like Protocol Buffers is a good fit, but you'll need an encoder/decoder on each end to turn the data back into human-readable values.
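To get a feel for the size difference, here is a minimal Python sketch comparing a JSON payload with a fixed binary layout packed via the standard struct module (struct here is just a stand-in for a schema-based encoder such as Protocol Buffers; the field names and values are illustrative):

```python
import json
import struct

# 10 sensor readings and 10 output states, as in the question
sensors = [22.5, 48.1, 1013.2, 0.3, 5.0, 7.7, 19.9, 3.3, 12.0, 8.8]
outputs = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

# Human-readable JSON payload
json_payload = json.dumps({"s": sensors, "o": outputs}).encode()

# Fixed binary layout: 10 little-endian 32-bit floats + 10 flag bytes.
# Both ends must agree on this layout, just as they would share a
# .proto schema with Protocol Buffers.
bin_payload = struct.pack("<10f10B", *sensors, *outputs)

# JSON is roughly twice the size of the 50-byte binary layout here,
# and the gap grows with longer field names and higher precision.
print(len(json_payload), len(bin_payload))
```

Over a 2G link sent every few seconds, that factor of two (or more, with verbose field names) adds up quickly.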

\n" }, { "Id": "3540", "CreationDate": "2018-10-19T08:47:26.720", "Body": "

Please guide me with the steps and program to turn a light on/off using a custom Alexa skill, as I want to use an invocation command with a custom skill. Please guide me as I am very new to this topic.

\n\n

I have implemented smart light control using the Amazon smart light documentation: coded in a Lambda function and configured in the Samsung smart light developer console using the client ID and client secret.

\n\n

Now I want to add an invocation command, so I want to proceed with a custom skill.

\n", "Title": "Steps to turn On/OFF the light using custom skill of alexa", "Tags": "|alexa|", "Answer": "

The Smart Home Skill API does not use the \"Alexa, Ask ..... \" syntax. It just uses the name of the device you want to control.

\n\n

This is a very deliberate step; it makes for a much more natural way of speaking to Alexa to control devices. e.g.

\n\n
\n

Alexa, turn off the kitchen light

\n
\n\n

It also handles all the entity extraction and translation for different languages for you.

\n\n

If you want to use the \"Alexa, Ask ...\" pattern then you will need to define a skill using the normal (Not Smart Home Skill) API. In this case you will need to use the tools to map out all the possible sentences that the user might use and flag the entities in the sentences.

\n\n

This is a lot more work for a way of interacting with a device and should only be done if you really need to do tasks that cannot be handled by the built-in Smart Home Skill actions.

\n" }, { "Id": "3550", "CreationDate": "2018-10-23T17:42:55.967", "Body": "

Can I have one device set up with HomeKit and Alexa at the same time e.g. a motion sensor, or a smart plug?

\n\n

I must stress I am asking whether it is possible to set up the device with the two services at the same time.

\n", "Title": "Controlling one device using Alexa and Siri at the same time?", "Tags": "|alexa|ios|apple-homekit|", "Answer": "

Yes, provided the device provides its own control infrastructure. The model is (roughly):

\n\n\n\n

The exception to this model might be where a device uses one of your system hubs to provide its connectivity. In this case, you're relying on one integrator exposing locally connected devices to a competitor.

\n\n

You can refer to this question for a general idea of how devices connect together. An endpoint may have a direct network connection (for example if it has WiFi), in which case many services can establish a connection. If the endpoint only uses bluetooth or ZigBee then some sort of hub is needed. Of course, this hub can be a device you provide if the protocols are open.

\n" }, { "Id": "3558", "CreationDate": "2018-10-28T16:39:13.477", "Body": "

I'm trying to set up a smart home and I know that I need a couple of plugs, two switches and some lamps.\nWhat I don't know is what extras I need and how I can join them together.

\n\n

After some research, my conclusion was that products marked as Zigbee can work with products from other producers.

\n\n

So the first question is how?

\n\n

Then after some other research, it seemed like Xiaomi has the best collection, so I could ignore the risk and just go for everything from the same brand. But some devices still need a controller, and the controller and the plugs are all Chinese-socket compatible. I couldn't find any adapter cheaper than $10 that looks similar to the Chinese type (it says Australian-to-EU adapter; I'm not sure if it works for Chinese plugs). That means the adapter would cost almost as much as the plugs, and I'm still not sure it would work.

\n\n

I thought of buying the plugs from another brand, so I asked the question below some days ago. Still no answer.\nCan I use Xiaomi Smart Light Switch or other smart switches to control devices from other producers?

\n\n

So now the question is: do you know a combination of devices that will work together and save me from this wondering?

\n", "Title": "Which items work together for a smart home? (In Europe)", "Tags": "|smart-home|smart-plugs|smart-lights|", "Answer": "

If you just want lights and sockets, IKEA have just added Smart Sockets to their Tr\u00e5dfri range.

\n\n

They have a range of different bulbs and include on/off control, dimmers and multifunction (dim/on/off/colour temp). The whole system runs on Zigbee and can include an optional hub that allows control via an App/Alexa/Google Assistant/Apple Homekit.

\n\n

The hub also provides a CoAP interface so you can add some DIY control.

\n\n

Prices are (IMHO) pretty cheap.

\n\n

They support US/UK/Euro socket types and threaded bulbs.

\n\n

The bulbs are known to work with Philips Hue systems if needed.

\n" }, { "Id": "3559", "CreationDate": "2018-10-28T17:51:04.673", "Body": "

I have a few devices which ultimately talk to an MQTT bus. This bus is monitored by my own program (in Python) which makes decisions based on the context (\"scenarios\").

\n\n

I am considering adding a Google Home speaker to this (I do not own one yet) and I am wondering whether it is possible to connect it to my system.

\n\n

I imagine that there is a need to

\n\n\n\n

Is this at all possible for DIY orchestrators?

\n\n

If so, is there reasonable documentation for this? I searched on Google and surprisingly I didn't find anything (I am used to its API docs, as I retrieve Calendar and Directions information from there). There is quite a lot of advertisement about what it can do and all the devices it can connect to, but nothing API-like.

\n\n

I initially thought that Actions would be the way to go but it looks like this is a way to extend Google Assistant (and Google Home) to new actions. My actions are (so far) quite standard - it is rather the \"where to apply them\" which I do not know how to approach.

\n", "Title": "How to connect Google Home with a DIY home automation system?", "Tags": "|mqtt|google-home|google-assistant|", "Answer": "

The API for this sort of thing is here

\n\n

Google Assistant lets you write Smart Home Actions which let you add your devices to Model and then pass messages to your backend to then control the devices.

\n\n

Unless you want to end up writing a LOT of code, doing a load of testing and then getting it approved by Google, you don't want to try and do this from scratch. Use an existing open source framework like Home Assistant that supports Google Assistant. Home Assistant also supports MQTT.

\n\n

At some point I'll get round to finishing my Node-RED Google Assistant Smart Home node to go with my Amazon Alexa version.

\n\n

Edit:

\n\n

My Google Home Action for Node-RED is now live here

\n" }, { "Id": "3565", "CreationDate": "2018-10-31T09:34:54.813", "Body": "

I have my Alexa's lambda function on AWS Lambda Console. There I call a web API I created.

\n\n

If I call my web API on Visual Studio Code, it works great. But if I use Alexa Developer Console to call my web API, it always says:

\n\n
\n

Error: connect ECONNREFUSED 127.0.0.1:63713

\n
\n\n

Is it because my web API is on localhost? How can I solve this? I'm still struggling to test locally with the Alexa Developer Console...

\n\n

My code:

\n\n
var url = \"http://localhost:63713/_apis/v1.0/Car/GetCarById?id=1\";\n\nhttp.get(url, function (res) {\n    var webResponseString = '';\n\n    if (res.statusCode != 200) {\n        doWebRequestCallBack(new Error(\"Non 200 Response\"), null);\n    }\n\n    res.on('data', function (data) {\n        webResponseString += data;\n    });\n\n    res.on('end', function () {\n        var webResponseObject = JSON.parse(webResponseString);\n        doWebRequestCallBack(null, webResponseObject);\n    });\n}).on('error', function (e) {\n    doWebRequestCallBack(new Error(e.message), null); // <-- where I recive the error message\n});\n
\n", "Title": "Alexa - call web API in localhost from AWS Lambda Console", "Tags": "|alexa|aws-iot|aws|", "Answer": "

localhost always points to the machine the code is running on.

\n\n

In this case the lambda is running on one of Amazon's machines so the web app you are trying to access will not be there (as it's running on your machine).

\n\n

You will need to deploy your web app somewhere public (e.g. an AWS VM or Lightsail instance) and update the lambda to point to that location.

\n" }, { "Id": "3578", "CreationDate": "2018-11-02T21:42:22.407", "Body": "

The sun component has sunset and sunrise events, but I would like to trigger on \nsolar noon. This page tells me I can use the elevation for this. What number should I enter to achieve what I want?

\n", "Title": "How to trigger on solar noon in Home Assistant?", "Tags": "|smart-home|home-assistant|", "Answer": "

The most obvious answer would be to use the azimuth parameter, and check for 180 (south). The sun component also has a 'rising' state (before noon), and next_noon time.

\n\n

If you follow the link to the US Naval observatory, you can print out a table of solar position for any particular location, on any particular date. For my location (52N), I see the sun crosses the horizon around 7am, 4pm at this time of year, and reaches a maximum elevation of 21.5 degrees. In the middle of the summer at the same location, I get 4am, 8pm and 61 degrees elevation.

\n\n

There is no simple calculation in this case. The zero points are constant as being the start and end of the day (from which you can pick earlier or later references), but the elevation does not make any correction for your location or the time of year.

\n\n

Regardless of how you determine the correct angle for 'noon', you would need to repeatedly update this based on the date.

\n\n

A better approach might be to determine the relevant time of day based on location, since a constant UTC time might be a close enough approximation. Here, 12:00 is good enough (but I have the meridian just a few miles away).
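If you take the time-of-day route, local solar noon can be approximated from longitude alone, since the sun crosses a given meridian about four minutes later per degree west. A rough Python sketch (this deliberately ignores the equation of time, which shifts true solar noon by up to roughly ±16 minutes over the year):

```python
def solar_noon_utc(longitude_deg):
    """Approximate solar noon in UTC hours for a given longitude.

    Positive longitudes are east of Greenwich; each 15 degrees of
    longitude shifts solar noon by one hour (15 deg/hour = 360/24).
    """
    return 12.0 - longitude_deg / 15.0

# On the Greenwich meridian solar noon is ~12:00 UTC;
# at 75 degrees west it is ~17:00 UTC.
print(solar_noon_utc(0.0), solar_noon_utc(-75.0))
```

That constant offset is what makes a fixed clock-time trigger "good enough" when you live close to your time zone's reference meridian.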

\n" }, { "Id": "3583", "CreationDate": "2018-11-05T15:25:44.497", "Body": "

Not sure what caused it, but my water softener stopped connecting to the Iris app recently. Has anyone found an easy solution?

\n", "Title": "\"No connection\" reported on Iris By Lowe's Android app for WIFI-enabled Whirlpool Water Softener (WHESCS)", "Tags": "|smart-home|wifi|", "Answer": "

Although the Lowe's troubleshooting steps recommend re-pairing the unit to the Iris hub via WIFI, I found a much easier method.

\n\n

Lowe's recommends tapping the Tank Light five times on the water softener unit itself to reset this WIFI-enabled device, then following the pairing instructions in the Iris app to re-pair the device (see video instructions at https://www.youtube.com/watch?v=uhJQXagqjTg).

\n\n

However, I tried just unplugging the power cord from the wall socket for ten seconds and then plugging it back in, and that fixed it right up without having to go through the complex re-pairing process!

\n" }, { "Id": "3592", "CreationDate": "2018-11-07T15:08:08.363", "Body": "

I have an Embedded Board where the distribution on it is a custom Linux Distribution based on Yocto.

\n\n

I have added ntp, and via ntpd I sync time with the common NTP pools, either via a UMTS 3G dongle or Ethernet.

\n\n

This board, along with some ESP32-PoE boards by Olimex, is connected via an unmanaged switch.

\n\n

Purpose

\n\n

The ESP32 boards have sensors that collect information, add a timestamp, and send the data to the InfluxDB running on the main embedded board via Ethernet, making this a wired sensor network. These ESP32 boards also have a DS3231 RTC on them, so I want them to first get the time from an NTP server running on the embedded board to sync the RTC, and then send information to the InfluxDB.

\n\n

Questions

\n\n
    \n
  1. How does one create an NTP server on the embedded board? Can I add a line to the ntp.conf file to set up a server at, e.g., 192.168.4.11? Using this IP address in my Arduino code I could then ask for timestamps.

  2. \n
  3. In case of testing: if I somehow set up an NTP server on the embedded board, how can I initially test the time coming from it? Is there a command-line utility to poll the NTP server from a regular computer and check whether the reported time is correct?

  4. \n
\n", "Title": "Make Embedded Board NTP Server for Arduino boards in the subnet", "Tags": "|arduino|linux|", "Answer": "

Precise Description for the Query

\n\n

Initial NTP time sync

\n\n
    \n
  1. In the Yocto build system under conf/local.conf add ntp recipe as follows:

    \n\n
     IMAGE_INSTALL_append = \" ntp\"\n
  2. \n
  3. on Target board initially stop the systemd service:

    \n\n
     systemctl stop ntp\n
  4. \n
  5. Assuming board is connected to the internet:

    \n\n
      ntpd -gq\n
    \n\n

    Info: Check time using date

  6. \n
  7. To be on the safe side, also sync the hardware clock to NTP time:

    \n\n
     hwclock -w --localtime\n
  8. \n
  9. Restart the systemd service

    \n\n
     systemctl start ntp\n
  10. \n
\n\n

Setup a local NTP Server on Embedded Board

\n\n
    \n
  1. Stop the systemd service:

    \n\n
     systemctl stop ntp\n
  2. \n
  3. Edit the /etc/ntp.conf to make the embedded board broadcast the NTP timestamps on port 123. Add the following line:

    \n\n
     # Here the IP Address could that of your board but make sure to use\n # Broadcast address (x.x.x.255) and if you have a larger network\n # select your subnet masks accordingly\n broadcast 192.168.1.255 \n
  4. \n
  5. Restart the systemd service:

    \n\n
    systemctl start ntp\n
  6. \n
\n\n

Achieving timesync with Sensor Nodes

\n\n
    \n
  1. Assuming one has boards that can be programmed with Arduino; download the NTPClient Arduino Library.

  2. \n
  3. In your Sketch use the NTPClient constructor to connect to your Local NTP server via its IP address

    \n\n
    NTPClient timeServer(ntpUDP, \"192.168.1.123\", 0, 60000);\n
  4. \n
\n\n

and obtain the timestamps from the Local NTP Server

\n\n
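On the testing question, besides command-line tools such as ntpdate -q, you can poll the server yourself. A minimal SNTP client query in Python (the server address is the example one from above, so adjust it for your network; error handling is omitted for brevity):

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

def build_sntp_request():
    # First byte 0x1b: LI=0, VN=3, Mode=3 (client); the rest of the
    # 48-byte packet can stay zero for a simple query.
    return b"\x1b" + 47 * b"\0"

def query_ntp(server, port=123, timeout=5.0):
    """Return the server's transmit timestamp as a Unix time float."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_sntp_request(), (server, port))
        data, _ = sock.recvfrom(48)
    # Transmit timestamp: big-endian seconds + fraction at bytes 40-47
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

# Example usage against the local server configured above:
#   print(query_ntp("192.168.1.123"))  # Unix timestamp as a float
```

Comparing the returned value against the computer's own (pool-synced) clock tells you whether the board's server is handing out sane time.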

References

\n\n\n" }, { "Id": "3599", "CreationDate": "2018-11-09T22:22:29.667", "Body": "

Is there a common standard/practice for having an IOT device establish a connection to a command and control server, and then act in a server role (i.e. the C&C sends requests to the device and the device sends back responses)? Something in the vein of reverse HTTP or RPC.

\n\n

EDIT:\nan example use case: The device is behind a NAT gateway and the C&C is unable to initiate a connection to it. We want to send a \"ping\" message to the device (to see if it's on and healthy or something) and receive a \"pong\" in reply.

\n", "Title": "Reverse rpc/device control", "Tags": "|communication|", "Answer": "

Most of the major providers of mass IoT services (AWS, Microsoft, IBM) seem to have settled on MQTT.

\n\n

The MQTT broker runs in the cloud and the devices connect out to the broker (this gets round the NAT problem) and then subscribe to topics on which messages are published. Topics can be general or specific to the device/client.

\n\n

The protocol also has built-in keep-alive checking to determine if the device is still working, and the broker can publish a special message (the Last Will & Testament) if the device goes offline unexpectedly.
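The \"ping\"/\"pong\" request/response pattern then reduces to a per-device topic pair. A small Python sketch of the topic scheme and the device-side dispatch logic (topic names are illustrative, not a standard; the wiring to a broker would use a client library such as paho-mqtt):

```python
def cmd_topic(device_id):
    # The C&C publishes commands here; each device subscribes
    # only to its own command topic.
    return f"devices/{device_id}/cmd"

def resp_topic(device_id):
    # The device publishes responses here; the C&C subscribes.
    return f"devices/{device_id}/resp"

def handle_command(payload):
    """Device-side dispatch: answer a health check, reject the rest."""
    if payload == "ping":
        return "pong"
    return "error: unknown command"

# With paho-mqtt the device side would look roughly like:
#   client.subscribe(cmd_topic(my_id))
#   # in on_message:
#   client.publish(resp_topic(my_id), handle_command(msg.payload.decode()))
print(handle_command("ping"))
```

Because the device opened the outbound connection to the broker, the C&C can "call into" it through these topics without any inbound port being reachable through the NAT.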

\n" }, { "Id": "3605", "CreationDate": "2018-11-13T01:34:01.033", "Body": "

The speaker in the video Richard Lansdowne - LoRa Geolocation mentions straight-off that he won't be quoting an accuracy for geolocation using LoRa, and it's of course not the ideal signal you would use.

\n\n

A comment below the video says:

\n\n
\n

Excellent Richard, you nailed it! You've succinctly described the IoT asset tracking problem and what we need to do going forward to make this real.

\n
\n\n

So far I can't figure out, at least from the video, how this might actually be done using standard LoRa and perhaps LoRaWAN. How would the LoRa protocol estimate distance in order to triangulate, assuming that's how this works? What would be an approximate best-case accuracy based on hardware limitations? (Of course your mileage may vary.)

\n", "Title": "How does LoRa based geolocation work? How does it measure distance?", "Tags": "|lora|lorawan|geo-tagging|", "Answer": "

LoRaWAN localization uses TDOA: Time Difference Of Arrival.

\n\n

Gateways with the precise timing extension are all synchronized via GPS. When a device sends a frame, the very accurate time at which the frame was emitted is unknown, but by recording the exact time each of several gateways receive that frame (compared to GPS time), and knowing the position of gateways, you can calculate the position of the device.

\n\n
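To make the geometry concrete, here is a small Python sketch that recovers a 2D position from arrival times at four gateways by brute-force grid search (a production solver would use proper hyperbolic multilateration with error weighting; the coordinates and timing here are idealized and noise-free):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def toa(gateway, point):
    """Time of flight from point to gateway."""
    return math.dist(gateway, point) / C

def locate(gateways, arrivals, step=25.0, span=1000.0):
    """Grid-search the position whose predicted arrival-time differences
    (relative to the first gateway) best match the observed ones.
    The unknown absolute emission time cancels out of the differences."""
    observed = [t - arrivals[0] for t in arrivals]
    n = int(span / step)
    grid = [i * step for i in range(-n, n + 1)]
    best, best_err = None, float("inf")
    for x in grid:
        for y in grid:
            p = (x, y)
            predicted = [toa(g, p) - toa(gateways[0], p) for g in gateways]
            err = sum((a - b) ** 2 for a, b in zip(predicted, observed))
            if err < best_err:
                best, best_err = p, err
    return best

gateways = [(0.0, 0.0), (800.0, 0.0), (0.0, 800.0), (800.0, 800.0)]
device = (300.0, -475.0)
t0 = 0.125  # unknown emission time; it cancels in the differences
arrivals = [t0 + toa(g, device) for g in gateways]
print(locate(gateways, arrivals))  # ≈ (300.0, -475.0)
```

Note the timescales involved: 100 m of position difference is only ~330 ns of flight time, which is why gateways need GPS-disciplined nanosecond timestamping rather than ordinary clocks.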

On the gateway side, that requires hardware capabilities beyond what an SX130x offers, a clear view of the sky, some calibration, and a clever solver somewhere in a server room.

\n\n

On the device side, that requires nothing.

\n" }, { "Id": "3610", "CreationDate": "2018-11-14T12:40:46.403", "Body": "

I have an IoT solution that I need to connect to remotely. \nI don't have access to open ports nor can I set static IPs on-site where the IoT devices will be hosted, so I was looking for other solutions.

\n\n

An idea is to use a VPN box that I connect the devices to, and then connect the VPN box (router?) to the on-site Ethernet.

\n\n

Would I then be able to connect to the devices from a Raspberry Pi (kept off-site), and access them via internal IPs since they are on the same VPN network?

\n\n

In theory this sounds like a plausible solution \u2013 but I am not sure if I am missing something. So I am looking for an answer on whether this is even possible before purchasing VPN routers and licenses.

\n\n

Would it work? Is there an even better solution for this?

\n\n

The IoT devices are two I/O remotes (Moxa) which I am connecting to via python using pymodbus.

\n", "Title": "Is static IP needed if using VPN to connect to IoT devices?", "Tags": "|raspberry-pi|networking|hardware|wifi|", "Answer": "

Yes, that should work. You will need a static way to address your VPN server (no need to pay a license fee, just use OpenVPN); this could be an AWS instance with a DNS entry.

\n\n

But it sounds like you don't control the network that this will all be connected to. You need to talk to the owners of this network and explain what you are doing as you are opening up a way to remotely access their network (the VPN box needs access both to the sensors and the local network) if the external end of the VPN should become compromised. They may want to place your VPN device in a DMZ so it has no access to the rest of the internal network.

\n\n

This type of network setup has been used to compromise large organisations in the past, e.g. the casino fish tank monitoring system and a large supermarket that had remote monitoring of its fridges.

\n" }, { "Id": "3619", "CreationDate": "2018-11-17T15:06:20.850", "Body": "

I am interested in getting a Smart Home system with a good voice assistant, but my wife refuses to allow a voice assistant like Google Home or Amazon Alexa because she does not want ANY recordings from our home being stored in the cloud.

\n\n

For example, this link discusses how Alexa stores requests in the cloud for machine learning of the Alexa system.

\n\n

Is the Amazon Echo 'always listening' and sending data to the cloud?

\n\n

Is there any option here? Is there a smart home system that does not store recordings off-site? Any good options to ensure privacy and security with these systems?

\n\n

She's also worried about the ability of the government to eavesdrop. The above link quotes Intellihub: \"Echo ... can be easily hacked and used by government agencies like the FBI to listen in on conversations.\"

\n\n

One respondent suggests using the Mute on the bundled remote when not in use: is that sufficient, or could a good hack bypass the Mute request?

\n\n

Bottom-line: do I have to give up some Privacy & Security to get a Smart Home system?

\n", "Title": "Privacy with Voice Assistants", "Tags": "|security|amazon-echo|google-home|privacy|cloud-computing|", "Answer": "

Keeping the device muted when not in use removes a huge amount of the benefit of a voice assistant; if you have to get up, walk across the room and unmute it to use it, you might as well just install the app (the Alexa app, or Google Assistant on an Android phone) on your phone and only launch it when you want it.

\n\n

If you want a dedicated device that doesn't use the cloud then there are projects to roll your own (e.g. http://jasperproject.github.io/), but remember that the benefits of the cloud are:

\n\n\n" }, { "Id": "3635", "CreationDate": "2018-11-21T13:04:54.857", "Body": "

I am learning MQTT in python and the protocol for QOS = 1 and 2. I'm concerned about my Raspberry Pi getting too bogged down, or if there are other unexpected problems at the server. I can see there are limits for queued and inflight messages as shown below.

\n\n

So far I haven't been able to read about or understand what happens to my newest published message if these queues hit either of their limits. I would guess that the newest message gets top treatment and the oldest are dropped when a limit is reached, but I can't find that stated explicitly.

\n\n

Is that in fact what will happen?

\n\n

\"Bonus points:\" Is there a way to learn which Message IDs are still held in the queue?

\n\n
\n\n
import paho.mqtt.client as mqtt\nclient = mqtt.Client(\"I am your client\")\n
\n\n

Reading about qos 1 and 2 \nhelp(client.max_queued_messages_set) yields

\n\n
\n

Set the maximum number of messages in the outgoing message queue. 0 means unlimited.

\n
\n\n

and

\n\n

help(client.max_inflight_messages_set) yields

\n\n
\n

Set the maximum number of messages with QoS>0 that can be part way through their network flow at once. Defaults to 20.

\n
\n", "Title": "MQTT messages hit queued or inflight limits, is it stated somewhere it's the oldest messages that are dropped?", "Tags": "|mqtt|paho|", "Answer": "

The code that handles the max_queued_messages is here

\n\n
if self._max_queued_messages > 0 and len(self._out_messages) >= self._max_queued_messages:\n    message.info.rc = MQTT_ERR_QUEUE_SIZE\n    return message.info\n
\n\n

This looks like it does not bump any messages out of the queue, and it is up to you to handle storing this new message if you still want to keep it.

\n\n

The code for the max_inflight_messages is here.

\n\n

This will queue a message if there are currently too many inflight messages.

\n\n

Since the message queue test is done first there will always be room to queue the new message if the inflight limit is hit.
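So the answer to the question is: nothing is dropped from the queue; it is the newest message that is rejected. That behaviour can be mirrored in a few lines of plain Python (the class and attribute names are illustrative, not paho's API, though the numeric return codes match paho's MQTT_ERR_SUCCESS and MQTT_ERR_QUEUE_SIZE):

```python
from collections import deque

MQTT_ERR_SUCCESS = 0
MQTT_ERR_QUEUE_SIZE = 15

class BoundedOutQueue:
    """Mimics paho's policy: when the outgoing queue is full, the NEW
    message is rejected (nothing older is dropped) and the caller gets
    MQTT_ERR_QUEUE_SIZE back, so keeping that message is the caller's job."""

    def __init__(self, max_queued_messages):
        self.max_queued_messages = max_queued_messages
        self._out_messages = deque()

    def publish(self, message):
        if (self.max_queued_messages > 0
                and len(self._out_messages) >= self.max_queued_messages):
            return MQTT_ERR_QUEUE_SIZE
        self._out_messages.append(message)
        return MQTT_ERR_SUCCESS

q = BoundedOutQueue(max_queued_messages=2)
print(q.publish("m1"), q.publish("m2"), q.publish("m3"))  # 0 0 15
```

In real paho code you would check `info.rc == mqtt.MQTT_ERR_QUEUE_SIZE` on the object returned by `client.publish()` and buffer the payload yourself if you cannot afford to lose it.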

\n" }, { "Id": "3668", "CreationDate": "2018-12-04T18:09:05.917", "Body": "

I am wondering how the DHT 22 (measures humidity and temperature) sensor would work under double or triple atmospheric pressure (rough calculations have been done but it will vary and the exact density being dealt with is not currently known).

\n\n

The sensor is being considered to be used for a project I am involved with where the temperature of grain in a bin would be monitored to help prevent the spoiling of the grain.

\n\n

The project's small GitHub is located here https://github.com/PhysicsUofRAUI/binTempSensor , I don't think any more useful information is there but maybe I'm wrong.

\n\n

The project has a few different ways to communicate (only a few may eventually be made), but it will all be done at atm pressure so it is irrelevant in this question.

\n\n

I am also curious in general about how sensors would act in these situations, so if anyone has any knowledge they want to share that'd be awesome.

\n\n

Would it cause parts of it to break? Could other adverse things happen?

\n\n

Cheers and thanks!

\n", "Title": "Sensors under increased pressure", "Tags": "|sensors|", "Answer": "

The relevant quantity is not pressure, but the external force applied on top of the sensor.

\n\n

One way to verify (if you can afford it) is to actually test the effect of that external load on the sensor's readings.

\n\n

Another way is to install it in a casing that will absorb the external load. This casing needs holes so that temperature and humidity can reach the sensor.

\n" }, { "Id": "3682", "CreationDate": "2018-12-11T15:03:13.917", "Body": "

I must develop an iPhone app to monitor a device.

\n\n

The device is a boat light control system. Using the iPhone app I can control lights on/off. Lights can be turned on/off from the device touchscreen or using the iPhone app.

\n\n

It's important that the application is notified of light status changes (someone has turned on a light from a wall switch or from the device touchscreen). But it is also important that the device/iPhone status update procedure is power optimized (I don't want to drain the iPhone battery by continuously polling the device for status). It's not a problem to add Bluetooth to the device if this can optimize the system.

\n\n

Device and App are on the same LAN (the solution must work without internet connection).

\n\n

The device exposes a REST service to read/change the lights' on/off status.

\n\n

In the app I need to display in near-real-time the device status changes.

\n\n

Is there any better solution for the device to notify the iPhone app of a status change than having the app poll every n seconds?

\n\n

A kind of local device-to-iPhone notification? Maybe some feature of HomeKit may come in handy?

\n", "Title": "Near real-time LAN device status without polling", "Tags": "|ios|lan|", "Answer": "

I'm guessing your best bet is actually using BLE beacons and notifications. Why? Because there are very few things that can wake iOS from backgrounded mode, and the list is even more limited if you can't use the internet. Beacons and location services are among them.

\n\n

Here's a good article that talks about how to do this over BLE with all the gotchas. But basically you have to implement the iBeacon service and ensure it's correctly waking the phone even when the app isn't running. This will give you the best battery life.

\n\n

It also flips the mechanism from phone-initated poll, to device initiated notification.

\n\n

In theory, using the External Accessory framework and MFi is possible, but it's far more challenging to implement. Implement on the device side, that is. If you somehow do get your system to act as a HomeKit accessory, then the Accessory framework's registerForLocalNotifications lets you monitor events while backgrounded.

\n\n

For turning your system into a HomeKit-like device, you might try HomeBridge to prototype it.

\n" }, { "Id": "3695", "CreationDate": "2018-12-15T11:45:21.140", "Body": "

I'm considering buying a Google Home Mini, but it's unclear to me if it can actually do things that my smartphone can't do?

\n\n

I currently only have one \"smart\" light bulb, and I figure I'd mostly use a Google Home Mini to set my morning alarm, get news and basic things like that. But with time it would be nice to be able to turn on lights and other things by voice command. I can't really see that I would play music on such a device, so sound quality is not an issue.

\n\n

So my question is: could I just as well use an Android phone to do the same things as a Google Home Mini? I use one as my regular phone, and I also have an old one lying around that could be plugged in and basically only used for this purpose.

\n", "Title": "Can an Android phone do the same things as Google Home (Mini)?", "Tags": "|smart-home|google-home|google-assistant|android|smart-assistants|", "Answer": "

If the phone is new enough to run a version of Android with full Google Assistant then it should work for most things.

\n\n

The main points that differ will be:

\n\n\n\n

Sound quality from a Google Home Mini isn't bad (definitely way better than a phone speaker).

\n" }, { "Id": "3700", "CreationDate": "2018-12-16T14:11:28.627", "Body": "

I have a LED smart bulb (probably a ESP8255 based one) flashed with ESPurna. It was configured to connect to my WiFi network and it does, but unfortunately I forgot the password to connect to it (it presents a Basic Authentication login popup).

\n\n

I have hope that a hard reset would bring it back to factory settings but I do not know how to perform that hard reset.

\n\n

I tried switching it on (for 4-5 seconds) and off a few times, as I read somewhere, but it did not do the trick.

\n\n

Is there a standard (or at least expected) way to shortcut some PINs in order to simulate a \"reset button press\"? (I really, really would like to avoid reflashing it because of the tricky soldering)

\n", "Title": "How to reset a ESPurna LED bulb?", "Tags": "|smart-lights|", "Answer": "

Switching it on and off probably won't help as the configuration is probably saved into non-volatile memory. You do not want to reconfigure your device after every blackout or such.

\n\n

Your other chance would be the hard reset feature added in release 1.6.7, but that requires a button and, as per the espurna/config/hardware.h file, the AI-Thinker AI Light does not have a button defined by default.

\n\n
// -----------------------------------------------------------------------------\n// AI Thinker\n// -----------------------------------------------------------------------------\n\n#elif defined(AITHINKER_AI_LIGHT)\n\n    // Info\n    #define MANUFACTURER        \"AITHINKER\"\n    #define DEVICE              \"AI_LIGHT\"\n    #define RELAY_PROVIDER      RELAY_PROVIDER_LIGHT\n    #define LIGHT_PROVIDER      LIGHT_PROVIDER_MY92XX\n    #define DUMMY_RELAY_COUNT   1\n\n    // Light\n    #define LIGHT_CHANNELS      4\n    #define MY92XX_MODEL        MY92XX_MODEL_MY9291\n    #define MY92XX_CHIPS        1\n    #define MY92XX_DI_PIN       13\n    #define MY92XX_DCKI_PIN     15\n    #define MY92XX_COMMAND      MY92XX_COMMAND_DEFAULT\n    #define MY92XX_MAPPING      0, 1, 2, 3\n
\n\n

There is no such thing on the schematic either.

\n\n

All in all, you will need a re-flash, either to reset the configuration or to upload a new firmware with a button defined on one of the free GPIOs of the ESP.

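For reference, enabling the hard-reset path means compiling a custom image whose hardware block maps a button to a free GPIO. A minimal sketch of what such additions to espurna/config/hardware.h might look like; the pin choice (GPIO0) and the flag combination are assumptions, not something tested on an AI Light:

```cpp
// Hypothetical additions to the AITHINKER_AI_LIGHT section of
// espurna/config/hardware.h -- GPIO0 is an assumption; check your board.
#define BUTTON1_PIN         0
#define BUTTON1_MODE        BUTTON_PUSHBUTTON | BUTTON_DEFAULT_HIGH
#define BUTTON1_RELAY       1
```

With a button defined, a long press would then trigger the firmware's reset-to-defaults behaviour.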
\n" }, { "Id": "3714", "CreationDate": "2018-12-23T10:15:02.040", "Body": "

I am doing a home automation project in which I should be able to control the lights on/off but more than that, the brightness of the lights.

\n\n

I am using Home Assistant (not hassbian) and Node-Red along with MQTT.

\n\n

I am using a normal bulb I purchased from a hardware store connected to a Sonoff ESP8266 and am able to use Node-RED to trigger an on and off state, but am unsure how to trigger a specific brightness level.

\n\n

Is it possible to control the brightness of any normal light that is made into a 'smart light' through the use of Sonoff? Or must they be devices that have that functionality built into the light itself?

\n\n

(The same goes for other appliances, such as a normal portable fan made into a smart fan through an ESP8266, controlling its speed.)

\n", "Title": "Controlling normal bulbs brightness using Sonoff devices", "Tags": "|mqtt|esp8266|home-assistant|node-red|sonoff|", "Answer": "

There are devices made specially for dimming using only on-off impulses to set the brightness. A common pattern is to use a short on/off signal for on/off, and a longer on/off signal to increase brightness in n % steps.

\n\n

So provided you can make the Sonoff switch on and off sufficiently fast and reliably to get the timing right, adding a device like this or any other similar impulse switch with a dimmer should work.

\n\n

(Note that I'm not suggesting to pulse-width modulate it - the shortest impulse necessary is ~0.5 s. The output remains as set until it receives the next command signal.)

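To make the timing concrete, here is a rough Python sketch of driving such an impulse dimmer through a Sonoff running MQTT firmware. The pulse durations are assumptions (check your dimmer's datasheet), and the MQTT client is the third-party paho-mqtt package:

```python
import time

# Assumed durations for a hypothetical impulse dimmer; check the datasheet.
SHORT_PULSE_S = 0.5   # short impulse: toggle on/off
LONG_PULSE_S = 1.5    # long impulse: step the brightness

def pulse_duration(action):
    """Map a logical action to an impulse length in seconds."""
    durations = {"toggle": SHORT_PULSE_S, "dim_step": LONG_PULSE_S}
    if action not in durations:
        raise ValueError("unknown action: %r" % action)
    return durations[action]

def send_pulse(client, topic, action):
    """Drive the Sonoff relay on, hold, then off (paho-mqtt client assumed)."""
    client.publish(topic, "ON")
    time.sleep(pulse_duration(action))
    client.publish(topic, "OFF")
```

The same on/hold/off sequence could equally be built from a trigger node in Node-RED.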
\n" }, { "Id": "3730", "CreationDate": "2018-12-27T23:03:36.660", "Body": "

I have an MH-Z14 carbon dioxide sensor and have been using it to try to detect when a room may need some fresh air. But I've also noticed that the reading increases drastically when a human is present in the room, especially close to the sensor itself.

\n\n

I am wondering if anyone has tried to use the current CO2 value in a room to estimate the approximate number of people in the room, and how feasible and accurate it could be.

\n", "Title": "Is it possible to use a CO2 sensor to detect how many people are in a room?", "Tags": "|smart-home|sensors|system-architecture|", "Answer": "

It appears some research has been done on this already \u2013 Sensing by Proxy: Occupancy Detection Based on Indoor CO2 Concentration describes a model developed at the University of California, Berkeley to detect occupancy based on CO2 concentration.

\n\n
\n

We propose a link model that relates the proxy\n measurements with unknown human emission rates based on a\n data-driven model which consists of a coupled Partial Differential\n Equation (PDE) \u2013 Ordinary Differential Equation (ODE) system.

\n
\n\n

Their model is apparently more accurate than other machine learning models they tested:

\n\n
\n

The inference of the number of\n occupants in the room based on CO2 measurements at the air\n return and air supply vents by sensing by proxy outperforms a\n range of machine learning algorithms, and achieves an overall\n mean squared error of 0.6569 (fractional person), while the best\n alternative by Bayes net is 1.2061 (fractional person).

\n
\n\n

Algorithm 1 (p. 3) in the paper may give some direction on how to implement a similar system to theirs, which seems to be surprisingly reliable given the simplistic nature of the CO2 sensor.

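As a back-of-the-envelope alternative to the full PDE-ODE model, a steady-state mass balance already gives a rough headcount: occupants = ventilation rate times excess CO2 fraction, divided by the per-person emission rate. A Python sketch; all constants here are assumptions (real ventilation and emission rates vary widely):

```python
def estimate_occupants(co2_ppm,
                       outdoor_ppm=420.0,          # assumed outdoor baseline
                       ventilation_m3_per_s=0.05,  # assumed air exchange for the room
                       emission_m3_per_s=5.2e-6):  # assumed CO2 output per seated adult
    """Steady-state mass balance: people = Q * (C_in - C_out) / g."""
    excess_fraction = max(co2_ppm - outdoor_ppm, 0.0) * 1e-6  # ppm -> volume fraction
    return ventilation_m3_per_s * excess_fraction / emission_m3_per_s
```

With these constants, an indoor reading of 800 ppm suggests roughly 3-4 people; the paper's dynamic model exists precisely because the steady-state assumption is often violated.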
\n" }, { "Id": "3740", "CreationDate": "2018-12-30T17:50:30.423", "Body": "

I'm trying out the new Echo Dot (3rd Gen) to control an ESP8266, following some tutorials. For now I just want to change the state of a relay using the SetBinaryState action. But after discovering the device, if I try to turn it on, I get the \"Device doesn't support that\" response.

\n\n

The tutorials I've found are for the previous generations, and the packets seem to be different, as I had to fix something in the discovery process. But I can't figure out where the problem is when setting the state, as I haven't seen any documentation related to this. This is the setup.xml:

\n\n
\"<serviceList>\"\n    \"<service>\"\n        \"<serviceType>urn:Belkin:service:basicevent:1</serviceType>\"\n        \"<serviceId>urn:Belkin:serviceId:basicevent1</serviceId>\"\n        \"<controlURL>/upnp/control/basicevent1</controlURL>\"\n        \"<eventSubURL>/upnp/event/basicevent1</eventSubURL>\"\n        \"<SCPDURL>/eventservice.xml</SCPDURL>\"\n    \"</service>\"\n\"</serviceList>\"\n
\n\n

And these are the logs that I'm getting:

\n\n
Received packet of size 94\nFrom 192.168.1.3, port 50000\nRequest:\nM-SEARCH * HTTP/1.1\nHost: 239.255.255.250:1900\nMan: \"ssdp:discover\"\nMX: 3\nST: ssdp:all\n\n\nResponding to search request ...\n\nSending response to 192.168.1.3\nPort : 50000\nResponse sent!\n\nReceived packet of size 101\nFrom 192.168.1.3, port 50000\nRequest:\nM-SEARCH * HTTP/1.1\nHost: 239.255.255.250:1900\nMan: \"ssdp:discover\"\nMX: 3\nST: upnp:rootdevice\n\n\nResponding to search request ...\n\nSending response to 192.168.1.3\nPort : 50000\nResponse sent!\n\nResponding to setup.xml ...\nSending :<?xml version=\"1.0\"?><root><device><deviceType>urn:Belkin:device:controllee:1</deviceType><friendlyName>Living room light</friendlyName><manufacturer>Belkin International Inc.</manufacturer><modelName>Emulated Socket</modelName><modelNumber>3.1415</modelNumber><UDN>uuid:Socket-1_0-38323636-4558-4dda-9188-cda0e616a12b</UDN><serialNumber>221517K0101769</serialNumber><binaryState>0</binaryState><serviceList><service><serviceType>urn:Belkin:service:basicevent:1</serviceType><serviceId>urn:Belkin:serviceId:basicevent1</serviceId><controlURL>/upnp/control/basicevent1</controlURL><eventSubURL>/upnp/event/basicevent1</eventSubURL><SCPDURL>/eventservice.xml</SCPDURL></service></serviceList></device></root>\n\nResponding to  /upnp/control/basicevent1...\nRequest: <?xml version=\"1.0\" encoding=\"utf-8\"?><s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\"><s:Body><u:GetBinaryState xmlns:u=\"urn:Belkin:service:basicevent:1\"><BinaryState>1</BinaryState></u:GetBinaryState></s:Body></s:Envelope>\n\nResponding to  /upnp/control/basicevent1...\nRequest: <?xml version=\"1.0\" encoding=\"utf-8\"?><s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\"><s:Body><u:GetBinaryState 
xmlns:u=\"urn:Belkin:service:basicevent:1\"><BinaryState>1</BinaryState></u:GetBinaryState></s:Body></s:Envelope>\n
\n\n

And then tons of these packets:

\n\n
Received packet of size 278\nFrom 192.168.1.1, port 1900\nRequest:\nNOTIFY * HTTP/1.1 \nHOST: 239.255.255.250:1900\nCACHE-CONTROL: max-age=3000\nLocation: http://192.168.1.1:5431/igdevicedesc.xml\nSERVER: Router UPnP/1.0 miniupnpd\nNT: uuid:f5c1d177-62e5-45d1-a6e7-9446962b761e\nUSN: uuid:f5c1d177-62e5-45d1-a6e7-9446962b76\n
\n\n

One thing I've noticed is that the eventservice.xml never gets called, but I'm not sure if that's correct.

\n", "Title": "ESP8266 - Device doesn't support that", "Tags": "|alexa|amazon-echo|", "Answer": "

After fiddling around, I figured out that you have to send the device state after every SetBinaryState request.

\n\n

So, if the ESP node receives a SetBinaryState command, it has to answer with a GetBinaryState response.

\n\n
void Light::_handleCommands()\n{\n  Serial.println(\"Responding to /upnp/control/basicevent...1\");      \n\n  String request = server->arg(0); \n\n  if (request.indexOf(\"SetBinaryState\") >= 0) \n  {\n    if (request.indexOf(\"<BinaryState>1</BinaryState>\") >= 0) \n    {\n        Serial.println(\" - Got Turn on request\");\n        lightStatus= turnOnLight();\n\n        sendLightStatus();\n    }\n\n    if (request.indexOf(\"<BinaryState>0</BinaryState>\") >= 0) \n    {\n        Serial.println(\" - Got Turn off request\");\n        lightStatus= turnOffLight();\n\n        sendLightStatus();\n    }\n  }\n\n  if (request.indexOf(\"GetBinaryState\") >= 0) \n  {\n    Serial.println(\" - Got light state request\");\n\n    sendLightStatus();\n  }\n\n  server->send(200, \"text/plain\", \"\");\n}\n
\n\n

On top of that, it looks like the response has to 'make sense', i.e. if a Turn On request is received, it has to return '1' in the GetBinaryState response. Otherwise Alexa will say that the device is not responding or is malfunctioning.

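For reference, the body that sendLightStatus() sends back would be a SOAP envelope of roughly this shape. This is based on the Belkin Wemo UPnP schema; treat the exact element names as an assumption to verify against a packet capture:

```xml
HTTP/1.1 200 OK
Content-Type: text/xml

<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:GetBinaryStateResponse xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>1</BinaryState>
    </u:GetBinaryStateResponse>
  </s:Body>
</s:Envelope>
```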
\n" }, { "Id": "3749", "CreationDate": "2019-01-04T14:10:39.677", "Body": "

What can I do about my Google Home Mini not successfully placing calls I try to make to some of my contacts? It works for most contacts I tell it to call, but mysteriously some calls won't go through and the Mini gets stuck, telling me, "didn't catch that--is that percent one dollars, percent two dollars?"

\n

After many months of searching the Internet for a fix, I still have the same issue. I have not tried any solutions because I've found none yet. It seems to happen to some contacts and not others. Mom and Dad, for example, have the same number listed in my contacts. Google Home finds Dad and calls him, but not Mom. Every time I try it saying "call my Mom", it says, "Didn't catch that. Is that percent one dollar, percent two dollars?" Meanwhile, if I say "call my Dad," it dials their home just fine. If I ask, "what is my Mom's home phone number," it gives me the same number answer as when I ask, "what is my Dad's home phone number." It's the right phone number! So it knows the right thing. Yet Google Home gets stuck when I try to call her and not him.

\n

As a developer, it is clear to me that the Google Home Mini is improperly evaluating some %$1 and %$2 variable references in a scripted prompt within a string variable in its code, but no one has done anything about it. The issue's been reported to Google at I asked Google Home Mini to call someone and it won't work. Still, because of how the issue appears to be tied to the way unique contacts are set up in my address book, I think there ought to be a way that this could be corrected by us, the users--perhaps by changing the way our contacts are set up. I'm putting this out there in hopes that someone has come up with a solution!

\n", "Title": "Google Home Mini won't recognize contact and place a call, saying it, \"didn't catch that--is that percent one dollars, percent two dollars?\"", "Tags": "|google-home|", "Answer": "

A workaround I found just today: usually when the Mini asks, \"is that percent one dollars, percent two dollars,\" it actually already knows the right person you wish to call, so just answer, \"YES!\" It then goes ahead and makes the call to the right person I had requested.

\n\n

Also, if I specifically say, \"OK Google, call my Mother AT HOME,\" it works! It seems that the only time it gets stuck saying, \"Didn't catch that. Is that percent one dollar, percent two dollars?\" is when I say, \"OK Google, call my Mother.\"

\n\n

It would appear that the \"%$1\" variable might stand for \"Mother\" and \"%$2\" might stand for \"HOME\" in this scenario. It's just that Google Home is not properly evaluating those variables and matching them with the words in my request. This would explain why it only spits out that cryptic, generic-sounding question, \"Didn't catch that. Is that % one dollar, % two dollars?\"

\n" }, { "Id": "3757", "CreationDate": "2019-01-06T01:32:09.577", "Body": "

I am trying to publish image data to MQTT (CloudMQTT) with the following code, but the data is not appearing on MQTT; I don't even see any data on the MQTT broker.

\n
import identity\nimport json\nimport paho.mqtt.client as mqtt\nimport time\nimport datetime\nimport RPi.GPIO as GPIO\nimport bme280\nimport picamera\nimport base64\n\nDEVICE_ID = identity.local_hostname()\n\nconfig = json.loads(open('config.json').read())\nSERVER = config['mqtt1']['hostname']\nPORT = int(config['mqtt1']['port'])\nUSER = config['mqtt1']['username']\nPASSWORD = config['mqtt1']['password']\n\ndef on_connect(client, userdata, flags, rc):\n    if rc == 0:\n        print("Connected to broker")\n    else:\n        print("Connection failed")\n\ndef on_subscribe(client, userdata, mid, granted_qos):\n    print("Subscribed: " + str(message.topic) + " " + str(mid) + " " + str(granted_qos))\n\n\ndef on_message(client, userdata, message):\n    print("message topic=",message.topic)\n    print("message qos=",message.qos)\n    print("message retain flag=",message.retain)\n\nclient = mqtt.Client(DEVICE_ID)                 \ncamera = picamera.PiCamera()\ncamera.resolution = (1280, 720)\nclient.username_pw_set(USER, password=PASSWORD)   \nclient.on_connect= on_connect                     \nclient.on_subscribe= on_subscribe                  \nclient.on_message= on_message                     \nclient.connect(SERVER, port=PORT)            \nclient.loop_start()\ntopics = [("DOWN/site01/pod02",2)]                 \nclient.subscribe(topics)\ndata = {'device_id':DEVICE_ID}\nPUBLISH_DATA = "UP/" + "site01/" + DEVICE_ID + "/data"\nPUBLISH_IMAGE = "UP/" + "site01/" + DEVICE_ID + "/image"\n\ntry:\n    while True:\n        data['date'] = str(datetime.datetime.today().isoformat())\n        data['temperature'],data['pressure'],data['humidity'] = bme280.readBME280All()\n        data['switch1'] = GPIO.input(14)\n        data['switch2'] = GPIO.input(15)\n        client.publish(PUBLISH_DATA,json.dumps(data))\n\n        camera.capture('image.jpg')\n        image_data = open("image.jpg", "rb")\n        image_string = image_data.read()\n        imageByteArray = bytes(image_string)\n      
  client.publish(PUBLISH_IMAGE, imageByteArray, 0)\n        time.sleep(10)\nexcept KeyboardInterrupt:\n    camera.close()\n    client.disconnect()\n    client.loop_stop()\n    GPIO.cleanup()\n
\n

Anyone know what I am missing here?

\n", "Title": "Publish image data to MQTT not showing", "Tags": "|mqtt|raspberry-pi|paho|", "Answer": "

This is because the message payload is far bigger than will fit into a single TCP packet. The problem is that you are not starting the Paho client network loop, which chunks the message up and streams it out to the network.

\n

You have two choices.

\n

First, start the network loop in the script. This is best if you are planning on sending multiple images.

\n

Second, if you are just publishing one image, then the Paho client library has a single function to handle just that.

\n

Examples of how to use both can be found in the following Stack Overflow Answer.

\n" }, { "Id": "3765", "CreationDate": "2019-01-07T07:31:29.190", "Body": "

As you can see from my question on h/w recommendations, I am trying to design an evacuation system for a chemical factory.

\n\n

That requires knowing which room each employee is in at any given time. I can handle the system to track the employees, but have been looking for a long time for a durable wearable with long battery life for each employee to wear or carry.

\n\n\n\n

Then I had an epiphany when I looked down and saw the cheap fitness tracker on my wrist. It's a Xiaomi Mi Band 3, which I got free with my last 'phone.

\n\n

I am charging it about once every 3 weeks, although I currently do not turn on Bluetooth. I will need to verify that, although reviews give it 7 days of heavy usage.

\n\n

So - finally - to the question: how can I detect transmissions from the device? If they are frequent enough (say, more than once per minute), then it doesn't matter what the signal is, so long as I can get a MAC address out of it and use that to locate the device.

\n", "Title": "How can I detect the presence of a Xiaomi Mi Band 3 (which uses BlueTooth)?", "Tags": "|bluetooth|", "Answer": "

The band is a BLE (Bluetooth Low Energy) device; as such it will broadcast beacon packets at regular intervals (the BLE spec allows this interval to be configured).

\n\n

These beacons are how devices (e.g. phones) know that they are in range and can then connect to them to get more data, but the beacons can also contain a small amount of data themselves (e.g. BLE temperature sensors, physical web URL beacons or iBeacons). Beacon-only devices can run for years on just a coin cell.

\n\n

As for a detector, anything with a BLE-capable adapter can be used. A Raspberry Pi Zero W is a good start for a prototype; a simple BLE beacon listener can be written as a shell script with the hcitool command-line tool, or in any number of other languages (e.g. Node-RED has a BLE beacon listener node).

\n\n

Each device will have a MAC address, but be aware that cheap devices' addresses may not be unique (I once bought 20 USB BLE dongles and found 5 with the same MAC address).

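Once the scanner produces (MAC, RSSI) sightings, the matching logic is simple. A Python sketch of just the filtering step; the scan itself would come from hcitool, bluepy or similar, and the function and data shapes here are made up for illustration:

```python
def match_known_bands(scan_results, known_macs):
    """Filter raw BLE sightings down to the wearables we care about.

    scan_results: iterable of (mac, rssi) pairs, e.g. parsed from
    `hcitool lescan` output or a bluepy Scanner run.
    Returns {mac: strongest_rssi_seen} for known devices only.
    """
    known = {m.upper() for m in known_macs}
    seen = {}
    for mac, rssi in scan_results:
        mac = mac.upper()
        if mac in known:
            # keep the strongest sighting per device
            seen[mac] = max(rssi, seen.get(mac, -999))
    return seen
```

The strongest-RSSI sighting per band is a crude proximity signal; with one listener per room it is enough to decide which room a band was last heard in.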
\n" }, { "Id": "3768", "CreationDate": "2019-01-07T09:43:50.277", "Body": "

Every time I read about the \"Internet of Things\", it is suggested that devices are smart, but when it comes to implementing an IoT ecosystem, it is not so clear to me anymore. So I need some help explaining the terms \"Internet of Things\" and \"smart devices\".

\n\n

For me, there are two cases:

\n\n
    \n
  1. Devices are very smart.\nExample: the classical smart fridge, which orders milk if it's empty. The fridge is able to perform an order by itself and does not need any help from intermediate logic between the device (fridge) and the webshop.

  2. \n
  3. Devices are as dumb as possible.\nThe fridge does not know if the milk is empty. It just knows that sensor A (\"milk detector\") has status \"empty\" or \"not empty\". Or it just has a sensor B which counts the milk cartons in the fridge and pushes this to a queue or a REST service. This service is an intermediate layer between other systems like webshops or other devices.

  4. \n
\n\n

Does \"smart device\" even mean that itself is able to evaluate its own data and therefore is able to perform actions without any external help?

\n\n

I can't imagine that a simple light bulb should be able to go to my webshop and order a replacement automatically ... or is this exactly what the \"Internet of Things\" stands for?

\n\n

For 1., the main drawback I see is that you have to reprogram the fridge if it should have a different behaviour (order new milk if the count of milk < 2 instead of < 1). The upside is that you don't need any additional layer between the webshop and the device.

\n\n

For 2., the main drawback is the additional logic layer between the webshop and the device. But it is easy to modify the smart behaviour in a central way on a common platform where all \"smart devices\" are registered.

\n", "Title": "Dumb sensor vs smart sensor", "Tags": "|smart-home|sensors|", "Answer": "

In the Internet of Things, you have Smart Objects and not-Smart Objects.

\n\n

The not-Smart Objects are sensors and actuators. Sensors allow you to obtain different measures from the environment: light fluctuation using a photoresistor, temperature using a thermistor, or the detection of flames, sounds, movements, or any other change in the environment. Actuators can perform an action. Examples of mechanical actuators are motors, servomotors or hydraulic pumps, and examples of actions are sending a message, controlling LEDs, turning on lights or controlling the movement of a robot or any other available robot's actions.

\n\n

On the other hand, you have the Smart Objects. These are composed of not-Smart Objects: maybe only sensors, maybe only actuators, or a combination of both. However, Smart Objects also have the capacity to think, because they have a processor. For instance, a smartphone, a microcontroller like an Arduino, etc.

\n\n

You can see this in one of the existing definitions: 'A Smart Object, also known as Intelligent Product, is a physical\nelement that can be identified throughout its life and interact with the environment and other objects. Moreover, it can act in an intelligent way and independently under certain conditions. Furthermore, Smart Objects have an embedded operating system and they usually can have actuators, sensors, or both. This allows Smart Objects to communicate with other objects, process environment data, and do events'.

\n\n

Besides, we can classify Smart Objects according to the level of intelligence they have. There are three levels of intelligence, three levels of the location of the intelligence, and three levels of its aggregation.

\n\n

You can find a more detailed explanation and the classification here: https://www.researchgate.net/publication/307638707_A_review_about_Smart_Objects_Sensors_and_Actuators

\n\n

I also wrote an earlier explanation about the Internet of Things here: What's the difference between the Internet of Things and the traditional Internet?

\n" }, { "Id": "3770", "CreationDate": "2019-01-07T17:44:07.267", "Body": "

I have some sensor nodes (Bosch XDKs) that send information to an MQTT broker, and an application reads the information and stores it in InfluxDB. Simultaneously, I have RFID readers that scan tags and send the information to a MongoDB. Based on some logic, I wish to merge the data from InfluxDB and MongoDB together and send it to a custom platform, which would have its own database where the combined information will be stored to be visualized.

\n\n

\"Basic

\n\n\n\n

Requirement

\n\n\n\n

What I am unable to grasp is whether to use webhooks or websockets for the Combine App, and also how it is possible to know if MongoDB was updated with new information.

\n\n

I read a good SE question on webhooks vs. websockets and found that the scenario leans towards server-to-server (webhooks), where the client interfaces only with the platform via its internal services and not directly with the two DBs, but I am not sure if this is the case.

\n\n

I see this as a very relevant real-time IoT data application and am looking for some clear architecture and implementation answers for the case mentioned above.

\n\n

Are there repositories that somehow provide help as to how to create webhooks/websockets that fulfill my requirements?

\n", "Title": "Pushing Processed data from RFID + Sensors to a Platform", "Tags": "|sensors|system-architecture|web-services|rfid|web-sockets|", "Answer": "

Congratulations on writing a research paper on it.

\n

This can be done with streaming platforms such as Kafka or AWS KDA (Kinesis Data Analytics). There are clauses in the query language in KDA (and also in KSQL, I think) which will allow you to link events that happen very close together.

\n

https://aws.amazon.com/kinesis/data-analytics/\nhttps://docs.aws.amazon.com/kinesisanalytics/latest/dev/windowed-sql.html

\n

If all you're looking for is to get the last known sensor values when an RFID tag is read, you can do it very simply in just AWS IoT Rules. Let the sensors write to a DynamoDB table (using an IoT rule) and let the RFID reader pick up values from DynamoDB. You'll just need a way to associate the two together. That can be another DynamoDB table that is first consulted by a rule that receives an RFID reading (along with the reader ID) and uses the reader ID to look up the associated sensors. Then a second rule executes that picks up the last known values from those sensors. The end result can then be used to write to a queue, etc.

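If you end up writing the Combine App yourself, the core join ("last known sensor values at the moment a tag is read") is a small piece of logic. A Python sketch with made-up data shapes (timestamps as plain numbers; in practice you would query InfluxDB for the series and watch MongoDB for new RFID reads, e.g. via change streams):

```python
import bisect

def last_known(sensor_series, t):
    """sensor_series: list of (timestamp, value) sorted by timestamp.
    Returns the value at or just before time t, or None if none exists."""
    timestamps = [ts for ts, _ in sensor_series]
    i = bisect.bisect_right(timestamps, t)
    return sensor_series[i - 1][1] if i else None

def merge_rfid_with_sensors(rfid_events, sensor_series):
    """Attach the last known sensor value to each (timestamp, tag) RFID read."""
    return [(t, tag, last_known(sensor_series, t)) for t, tag in rfid_events]
```

The merged tuples are what the Combine App would then push (e.g. via a webhook POST) to the platform's own database for visualisation.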
\n" }, { "Id": "3780", "CreationDate": "2019-01-09T08:04:36.470", "Body": "

Not sure how much better I can phrase it in the title, but I'll try to explain as best I can.

\n

What I am trying to do:

\n
    \n
  1. Using IFTTT Webhook to POST a JSON message in MQTT Format.

    \n

    POST URL: test.mosquitto.org
    \nE.G:

    \n
  2. \n
\n
   {\n       "payload:": "kitchen",\n       "topic:": "device/state",\n       "retain:": true,\n       "qos": 2\n   }\n
\n
    \n
  1. Using a Node-RED mqtt input (subscribe) node with the settings configured to test.mosquitto.org port 1883 and topic set to device/state, I should be able to retrieve the payload I've published to the mosquitto broker in my Node-RED node.
  2. \n
\n

What went wrong
\nI think something might be wrong with the POST to the test.mosquitto.org broker.

\n

Troubleshooting
\nBy using mosquitto_pub and mosuqitto_sub commands, I'm able to to receive the payload in my Node-RED (which means that my Node-RED mqtt node is configured correctly).

\n

Commands:
\nTerminal 1: mosquitto_sub -h test.mosquitto.org -t device/state -d
\nTerminal 2: mosquitto_pub -h test.mosquitto.org -t device/state -m "kitchen"

\n

Node-RED successfully receives the message in JSON object format, but it doesn't receive anything when I attempt to publish through Postman using the POST method to the URL.

\n", "Title": "POST messages in JSON to mosquitto MQTT not being received from node-RED mqtt node", "Tags": "|mqtt|mosquitto|web-services|", "Answer": "

MQTT brokers are not HTTP servers; you cannot POST to a broker, it just won't work.

\n\n

MQTT and HTTP are two totally different protocols; if you want to bridge them you will need to write a program to do that. Doing so in Node-RED is trivial:

\n\n
HTTP-in set to receive POSTs --> MQTT-out to publish to broker\n                             |\n                             --> HTTP-response node (to close out the HTTP session)\n
\n\n

You can set the topic to publish to in the HTTP-in node, or insert a function node between HTTP-in node and the MQTT-out node to set topic and qos settings if you need to.

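The function node in that flow only has to map the POSTed JSON onto msg.topic, msg.payload, msg.qos and msg.retain. The same mapping, sketched here in Python for clarity (in Node-RED it would be a few lines of JavaScript); note the example payload in the question has stray colons inside the key names, which this deliberately tolerates:

```python
import json

def extract_mqtt_fields(body):
    """Normalise a webhook POST body into (topic, payload, qos, retain)."""
    data = json.loads(body)
    # tolerate keys like "payload:" with a stray trailing colon
    clean = {k.rstrip(":"): v for k, v in data.items()}
    return (clean.get("topic", ""), clean.get("payload", ""),
            int(clean.get("qos", 0)), bool(clean.get("retain", False)))
```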
\n" }, { "Id": "3784", "CreationDate": "2019-01-10T13:17:03.427", "Body": "

I have a virtual switch set up through Hubitat that actually opens and closes my garage door. I can say \"Alexa open Pete\" and she will turn the virtual switch on which opens the garage door. Also I can say \"Alexa close Pete\" to close the garage. The problem I am having is if I ask her \"Alexa is Pete open?\" I want her to determine the status of the switch. I am even ok if she responds \"Pete is on\" instead of \"Pete is open\" but I can't get that far even.

\n\n

I have tried (based on searches of things to try)

\n\n\n\n

The problem seems like it should be pretty simple, as it is really determining if a light/switch is on or off, but I can't figure out how to have her report the status/state of a device at all. Is that something she doesn't support?

\n\n

FYI if you didn't figure out, Pete is the name of my virtual switch for the garage door :)

\n", "Title": "Can Alexa determine a switch state?", "Tags": "|alexa|", "Answer": "

Amazon has since updated Alexa to directly support garage door openers. Now I can hook up the garage opener through Hubitat as a garage opener (no more having to trick things using virtual devices) and Alexa sees the garage opener as a garage opener (no more tricking).

\n

This allows her to directly open and close the garage door and report back its state. I can now say "Alexa, is the garage opener open?" and she will reply correctly.

\n" }, { "Id": "3785", "CreationDate": "2019-01-10T15:41:09.897", "Body": "

I have a big question that I have not been able to resolve.

\n\n

Basically, I want to understand how the smart devices in my home communicate with their servers, or rather, how their servers can reach the devices and send commands (like turn on/off) without any kind of port forwarding.

\n\n

I know that if I want to access a device remotely I need to expose it on the network through port forwarding on my router, but I didn't configure anything for these devices, so what kind of method do they use?

\n\n

Could someone let me know?

\n", "Title": "How Smart Home devices communication works", "Tags": "|smart-home|networking|communication|protocols|cloud-computing|", "Answer": "

The answer is that the device connects out to the control server and maintains this connection.

\n\n

The control server then uses this connection to forward requests.

\n\n

Typically, messaging protocols like MQTT are used, as they are designed to operate over a long-running connection initiated by the client.

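Sketched from the device side, the pattern looks like this (Python with the third-party paho-mqtt client; the broker host, topic layout and command strings are all assumptions):

```python
def handle_command(payload):
    """Interpret a command pushed down the long-lived outbound connection."""
    if payload == "ON":
        return True   # relay on
    if payload == "OFF":
        return False  # relay off
    raise ValueError("unknown command: %r" % payload)

def run_device(broker_host, device_id):
    """Connect *out* to the vendor's broker and wait for commands.
    No port forwarding needed: the device initiated the TCP connection."""
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt
    client = mqtt.Client(device_id)
    client.on_message = lambda c, u, msg: handle_command(msg.payload.decode())
    client.connect(broker_host, 1883, keepalive=60)
    client.subscribe("devices/%s/commands" % device_id)
    client.loop_forever()  # keepalive pings also hold the NAT mapping open
```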
\n\n

This approach also solves the problem that the controller would otherwise need to know where the device was. Given that most home internet connections use dynamic IP addresses (if not, even worse, CGNAT), the device would need a way to update the control server every time the external IP address changed (which it would not easily be able to determine). Port forwarding also needs either an owner technically capable enough to set it up, or UPnP to be enabled (which is probably a bad idea in the current security landscape). Port forwarding also limits the number of devices that can be deployed behind a given router.

\n\n

Also see my answer to how smart plugs work: How do smart plugs of domotic IoT work?

\n" }, { "Id": "3788", "CreationDate": "2019-01-10T08:58:39.730", "Body": "

I am setting up a DIY home automation system, and am just planning how my solution will look. I will have a Raspberry Pi 3 as the \"hub\" of the network. It will run Node-RED, MQTT (Mosquitto), a DotNet website, a database, a reverse proxy server, and possibly a few other things.

\n\n\n\n

I plan to have somewhere in the vicinity of 30-50 devices on the network, most accessed via MQTT, some via HTTP.

\n\n

The question: Will running all of that on a single Raspberry Pi 3b \"overload\" the system? Am I better off splitting responsibility across 2 Pis (and if so, what is the best logical grouping)?

\n\n

Further, are there any issues with thrashing the SD card in the Pi(s), or should I attach an SSD/HDD?

\n", "Title": "Will my Raspberry Pi home automation hub be able to do all that?", "Tags": "|smart-home|raspberry-pi|", "Answer": "

A Pi 3b is a very capable system, a quad core 1.2GHz Arm CPU with 1GB of RAM.

\n\n

It should be more than capable for what you are planning, but with all these things it will depend on exactly what you intend to do.

\n\n

Node-RED is basically a programming environment, so it's not possible to say how much resource it will consume without knowing the program (flow) you are going to run on it. (But you can say it will never consume more than one core, since it's a Node.js app and as such single-threaded.)

\n\n

You will have to assemble your system and test it to see how it behaves.

\n\n

The good news is that, in the very unlikely event the load becomes too much, you should be able to easily move the MQTT broker and reverse proxy to a separate Pi.

\n" }, { "Id": "3811", "CreationDate": "2019-01-18T17:45:38.713", "Body": "

I'm trying to make a highly power-saving IoT device that will work in a remote, static environment. I plan to build a circuit with a \"SARA\" NB-IoT module. Now I want to know if FOTA is possible with NB-IoT.

\n", "Title": "Are Over-the-Air Updates possible with NB-IoT", "Tags": "|over-the-air-updates|", "Answer": "

Yes, explicitly so, albeit relatively slowly.

\n\n

Carriers actually recommend combined NB-IoT and Cat-M1 modules, dynamically switching to Cat-M1 when FOTA is needed, although this will only make sense for some applications.

\n" }, { "Id": "3821", "CreationDate": "2019-01-20T19:24:44.853", "Body": "

I bought these at different times, but thought I was getting the same thing.

\n\n

Which one is the fake? And if neither is a fake, which one is newer and/or better?

\n\n

The first one uses a CR2477 battery, almost twice as thick as the CR2450:\n\"Sensor

\n\n

Both claim to be made by SmartThings, Inc, but the second one also includes a SmartSense trademark, and uses the thinner CR2450 battery:

\n\n

\"Sensor

\n", "Title": "Which one is the fake? Samsung SmartThings motion detectors", "Tags": "|samsung-smartthings|", "Answer": "

They don't claim to be made by SmartThings Inc. They just sell it.

\n\n

The second one has an older FCC ID T3L-SS014\n (2015) than the other one 2AF4S-STS-IRM-250\n (2016). Neither has been manufactured by Samsung or the SmartThings subsidiary. They come from CentraLite Systems, Inc and SAM JIN CO., LTD respectively. The former also used a factory or a partner in Mexico to produce it. Of course, that doesn't say anything about when they were actually produced.

\n\n

Also, the upper one is missing the European CE logo, so it could just be a US-only version. Furthermore, SmartThings was only acquired by Samsung in late 2014, so maybe they only started shipping under the Samsung SmartThings name with the later model.

\n\n

So the original SmartThings Inc probably had a partner in Mexico before being bought, and Samsung shifted production to a Korean partner/subsidiary \u2014 I'm not gonna investigate Korean Samsung corporate structures \u2014 after the acquisition.

\n\n

Long story short, doesn't look fishy to me.

\n" }, { "Id": "3829", "CreationDate": "2019-01-24T08:21:57.740", "Body": "

I'm working on a project about designing an IDS for the RPL protocol. In this project I have to simulate RPL and different types of attacks on it (e.g. the black hole attack). I saw on the Internet that Contiki OS and the Cooja simulator have not been updated for 4 years. Do you have an alternative you can suggest?\nAny comments would be appreciated.

\n", "Title": "LLN and RPL Simulator?", "Tags": "|6lowpan|contiki|", "Answer": "

There is this 6tisch simulator that implements the whole protocol stack. It's written in Python. Might help.

\n\n

https://bitbucket.org/6tisch/simulator

\n\n

Cheers.

\n" }, { "Id": "3838", "CreationDate": "2019-01-26T23:26:58.987", "Body": "

I'm trying my first IoT project whereby I want to:

\n\n\n\n

I saw someone online demonstrate something similar in this YouTube video

\n\n

https://www.youtube.com/watch?v=rU_Pw9Jb_PM

\n\n

The author shared the project on github here

\n\n

https://github.com/hjltu/esp8266-wifi-microphone

\n\n

When I study the code, I think what I see is the author taking the value of analogRead(A) and appending it to some kind of string as a payload, which is then published to an MQTT server.

\n\n

I see that the author expects the MQTT server or some other software to process the ESP8266 microphone audio data and output it as a .RAW file. This RAW file is eventually converted to a .WAV file with the help of ffmpeg.

\n\n

My question is this: What command allows the MQTT server to generate the .RAW file? Or is this done by entirely different software? It also appears to me that, for a single recording/audio file, the my_record() function in the esp8266-wifi-mic.ino file will send multiple payloads to the MQTT server. So how does the MQTT server know which published transmission belongs to which RAW file?

\n", "Title": "How does the MQTT server output a single raw file from multiple client publications?", "Tags": "|mqtt|esp8266|", "Answer": "

You are correct about the microphone input: my_record() samples the microphone output level 1000 times, appending each reading to a string variable, and publishes the resulting string to an MQTT broker.

\n

This process repeats 11 times every time my_record() is called.

\n

Note: you are slightly misunderstanding the .RAW file. It is a raw file, meaning that it is unprocessed and unformatted, just a stream of bytes. Using the term .RAW merely implies a file name extension.

\n

What command allows MQTT server to generate the .RAW file?

\n

The MQTT broker (server) does not generate the raw file; the data is published to the broker by an outside source, the ESP8266 in this instance.

\n

So how does the MQTT server know which published transmission belongs to which RAW file?

\n

It does not know. All it does is relay messages. It is up to the publisher to send to the correct topic, and it is up to the subscriber to watch the data on the correct topic.

\n

The messages could arrive at the subscriber out of sequence, so a sequence number needs to be included with each message if a correct data sequence is desired.

\n
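Building on the sequence-number idea, here is a minimal sketch of how a subscriber could restore the original order before writing the raw file. The seq:payload message format is an assumption for illustration; the linked repo does not define one.

```python
# Sketch: reassemble possibly out-of-order MQTT chunks into one stream,
# assuming each payload is prefixed with a sequence number like '3:...'.

def reassemble(messages):
    '''Order the chunks by sequence number and join their payloads.'''
    chunks = []
    for msg in messages:
        seq, _, payload = msg.partition(':')
        chunks.append((int(seq), payload))
    chunks.sort(key=lambda c: c[0])          # restore publish order
    return ' '.join(payload for _, payload in chunks)

# Messages may arrive out of order over MQTT:
arrived = ['2:33 34', '0:120 121', '1:98 97']
print(reassemble(arrived))                   # '120 121 98 97 33 34'
```

A real subscriber would also need a marker (or a known chunk count) to decide when one recording ends and the next begins.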

Have a look at these for a visual demo of MQTT messages.

\n

https://shiftr.io/try ... you can publish to this one (and subscribe)

\n

You can get your own account and watch your own messages being sent and received without the clutter of other messages.

\n" }, { "Id": "3844", "CreationDate": "2019-01-29T09:46:19.130", "Body": "

I have this idea of writing an app or something to make the gate we have in front of our house smart. But I just don't know where to start. \nI assume that the gate uses IR because we have remotes for it, but I'm not sure. The gate also has a codepad, and I want everything to keep working simultaneously, meaning that when I'm done with my project the remotes and the code still work.

\n\n

So, can anyone help me to get started with this project?

\n\n\n", "Title": "Smart gate - where to start?", "Tags": "|smart-home|raspberry-pi|arduino|", "Answer": "

As mentioned in the comments...

\n\n

Assuming the controller has a button that triggers the gate to open, you just need to find a way to \"press\" that button.

\n\n

The simplest way is probably to attach a relay across the 2 wires of the button. The relay can be triggered by something like an ESP8266 or the GPIO pins on a Raspberry Pi.

\n\n

The question is then whether it's a momentary press or whether the button needs to be held while the gate opens.

\n\n

If it's momentary, then the relay just needs activating for a second; if it's press-and-hold, then you will need to activate the relay for the length of time it takes the gate to open.

\n\n
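As a sketch of the two cases, here is how the relay pulse could look on a Raspberry Pi. The pin number and hold times are illustrative assumptions, and the pin-setting function is injected so the timing logic can run without hardware.

```python
# Sketch: 'press' the gate button by pulsing a relay from a Pi GPIO pin.
# RELAY_PIN and the hold times are assumptions to adapt to your wiring.
import time

RELAY_PIN = 17          # assumed BCM pin wired to the relay module

def pulse(set_pin, seconds):
    '''Close the relay for `seconds`, then release it.
    set_pin(state) is injected so the logic is testable without hardware.'''
    set_pin(True)
    time.sleep(seconds)
    set_pin(False)

# On a real Pi you would pass something like:
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM); GPIO.setup(RELAY_PIN, GPIO.OUT)
#   pulse(lambda s: GPIO.output(RELAY_PIN, s), 1)    # momentary press
#   pulse(lambda s: GPIO.output(RELAY_PIN, s), 15)   # press-and-hold

states = []
pulse(states.append, 0.01)   # dry run with a fake pin
print(states)                # [True, False]
```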

That is a very simple approach and doesn't cover things like feedback on whether the gate is open or closed.

\n" }, { "Id": "3845", "CreationDate": "2019-01-29T09:51:37.080", "Body": "

I'd like to use my Raspberry Pi to emulate phone Bluetooth stack. I'd like to get it paired with car radio, simulate arriving/outgoing call, simulate SMS and the more alerts/notifications I can manage.

\n\n

Below are some solutions I looked into; they are useful, but not exactly what I need (thanks also to https://stackoverflow.com/questions/28076898/emulate-a-bluetooth-device-from-pc and https://android.stackexchange.com/questions/4538/can-i-emulate-a-bluetooth-keyboard-with-my-android-device):

\n\n\n\n

Can someone suggest the best library or framework to work with to accomplish this aim? Has someone already played with a similar project? Any useful info?

\n\n

Thank you

\n", "Title": "Raspberry Pi emulate phone Bluetooth stack", "Tags": "|raspberry-pi|bluetooth|", "Answer": "

I found this project helpful too: https://github.com/nccgroup/nOBEX

\n\n
\n

nOBEX allows emulating the PBAP, MAP, and HFP profiles to test vehicle infotainment systems and similar devices using these profiles. nOBEX provides PBAP and MAP clients to clone the genuine virtual filesystems for these profiles from a real phones. This means downloading the entire phone book and all text messages. Raw vcards, XML listings, and MAP BMSG structures are stored, and can be modified as desired for negative testing. nOBEX can then act as a PBAP and MAP server, allowing vehicles and other devices to connect to it and retrieve phone book and message information. Vcards, BMSGs, and XML listings are sent exactly as saved, allowing malformed user modified data to go through. Since most vehicle head units require HFP support before they attempt using PBAP and MAP, nOBEX also provides rudimentary support for HFP. It will send back user customizable preset responses to AT commands coming from the vehicle's head unit. This allows mimicking a real cell phone.

\n
\n" }, { "Id": "3856", "CreationDate": "2019-02-04T16:00:24.650", "Body": "

We are in the planning phase for a telemetry IoT device with only occasional access to the internet.

\n\n

I found a lot of information online on how to store IoT data in the cloud, what databases to use, how to calculate space requirements, etc. What I'm missing is:

\n\n

How do I store the data locally on the client before sending it to the cloud?

\n\n\n", "Title": "How do I store data on iot device with only occasional access to the internet?", "Tags": "|product-design|", "Answer": "

Since, as per your spec, this would be a Linux-based system, you could store the data locally in an SQLite database.

\n\n

Further, check out SQLITE-SYNC. With this framework your application can supposedly work completely offline, then perform an automated bidirectional synchronization when an internet connection becomes available.

\n\n

So your app does not need to maintain its own routines to sync forward and back.

\n\n
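As a rough illustration of the pattern SQLITE-SYNC automates, a local buffer with a sent flag can be as simple as this sketch. The table and column names are made up, and the upload function is injected so the code runs without a network.

```python
# Sketch: a minimal local telemetry buffer in SQLite with a 'sent' flag.
import sqlite3

db = sqlite3.connect(':memory:')   # use a file path on the real device
db.execute('CREATE TABLE readings (ts REAL, value REAL, sent INTEGER DEFAULT 0)')

def store(ts, value):
    db.execute('INSERT INTO readings (ts, value) VALUES (?, ?)', (ts, value))

def sync(upload):
    '''Try to upload unsent rows; mark them sent only on success.'''
    rows = db.execute('SELECT rowid, ts, value FROM readings WHERE sent = 0').fetchall()
    for rowid, ts, value in rows:
        if upload(ts, value):
            db.execute('UPDATE readings SET sent = 1 WHERE rowid = ?', (rowid,))
    db.commit()

store(1.0, 42.5)
store(2.0, 43.1)
sync(lambda ts, v: True)           # pretend the connection is up
unsent = db.execute('SELECT COUNT(*) FROM readings WHERE sent = 0').fetchone()[0]
print(unsent)                      # 0
```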

Ref: https://ampliapps.com/sqlite-sync/

\n" }, { "Id": "3869", "CreationDate": "2019-02-10T16:14:21.177", "Body": "

If I buy a Google Home in the US and give it to a friend in Ethiopia, will it work if he speaks English to it, or will the Ethiopian IP address cause an issue with the functionality of the Google Home device?

\n", "Title": "Will google home mini work in foreign country (not supported)?", "Tags": "|google-home|", "Answer": "

I had a colleague bring me a Google Home from the US on release day to the UK and it worked just fine before they were released in the UK.

\n\n

The only problem may be that some Google Assistant Actions will not work, as they are limited to certain geographical regions. There may also be limitations based on where the Google ID used with the device is registered.

\n" }, { "Id": "3872", "CreationDate": "2019-02-11T03:06:29.090", "Body": "

I have two different sensors connected to my Raspberry Pi and I have to send the data to another device. The data is transmitted over a wireless network. I have been asked to send the data in synchronization (i.e. data collected together from the sensors must reach/be processed at the destination together).

\n\n

I have no prior experience in the field of networking or IoT (I'm currently reading Data Communications and Networking by Forouzan for that reason). I have searched Google but no relevant website comes up.

\n\n

*Using LoRa for transmission.

\n", "Title": "How to send data collected by two different sensors simultaneously?", "Tags": "|raspberry-pi|lora|lorawan|data-transfer|", "Answer": "

This is a programming task. In your code, you can probably read the values of the sensors separately. Now you can write 2 threads (one per sensor) that wait for their data to be read, and use a synchronization primitive that holds the program until both readings are available. When both readings are in, combine your data variables into a JSON object (or any other format) and send them once.
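A minimal sketch of that idea in Python, using a Barrier as the synchronization primitive; the read functions are stand-ins for real sensor drivers.

```python
# Sketch: read two sensors in parallel threads and combine both readings
# into one JSON message, sent only after BOTH threads have finished.
import json
import threading

readings = {}
barrier = threading.Barrier(2)     # holds each thread until both have read

def worker(name, read):
    readings[name] = read()
    barrier.wait()                 # block until the other sensor is done

def read_temperature():            # stand-in for a real sensor driver
    return 21.5

def read_humidity():               # stand-in for a real sensor driver
    return 48.0

threads = [threading.Thread(target=worker, args=('temperature', read_temperature)),
           threading.Thread(target=worker, args=('humidity', read_humidity))]
for t in threads: t.start()
for t in threads: t.join()

payload = json.dumps(readings)     # send this once, e.g. over LoRa
print(payload)
```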

\n" }, { "Id": "3880", "CreationDate": "2019-02-11T20:20:20.140", "Body": "

I'm new to MQTT so I'll try to explain my situation first and then ask some questions about MQTT.

\nBasically what I'm trying to do is set up some sensors in two houses, A and B, and be able to manage those sensors from my home, C. People from both A and B won't care whether the sensors are working or not; they won't be doing any managing or checking, so basically I need to be able to control and see the sensors' statuses from home. All the sensors have MQTT support and can send data using MQTT.

\nAlright, so that's what I'm trying to do. I've been reading a bit about MQTT and watching some videos, but I've noticed that in most examples everything is connected to the same router, so I was wondering whether maybe I can't do this with MQTT.
So my questions are:

\n\n\n\n

Also, I'd appreciate it if you could share some links to literature about learning MQTT (preferably using a Raspberry Pi, since that's what I wish to use) and home automation, since this is a topic I'm really interested in.

\nThanks to everyone who responds!

\n", "Title": "Quick questions about MQTT of a beginner", "Tags": "|mqtt|", "Answer": "
    \n
  1. MQTT over the internet is perfectly possible (It's how AWS IoT, IBM IoT, Microsoft IoT offerings all work). You should probably use MQTT with TLS to ensure it's secure.
  2. \n
3. You run a broker in the cloud and have devices (or other brokers) bridge to it. Because an MQTT client connects out to the broker, this works really well for devices that are behind routers using NAT.
  4. \n
  5. You don't need a broker at each house/location but it is a valid way to deploy things and use broker bridging to connect each broker to a broker in the cloud. This arrangement can allow things to continue to work when the connection to the outside world fails.
  6. \n
7. Yes, a single broker (or load-balanced cluster of brokers) in the cloud is a perfectly reasonable solution (see answer 1)
  8. \n
\n\n
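On point 1, here is a sketch of the TLS setup you would hand to an MQTT client library. The hostname is a placeholder, 8883 is the conventional MQTT-over-TLS port, and with paho-mqtt you would typically pass the context via client.tls_set_context() before connecting.

```python
# Sketch: TLS configuration for connecting to a cloud MQTT broker.
# BROKER_HOST is a placeholder; the context setup is standard library only.
import ssl

BROKER_HOST = 'broker.example.com'   # assumed cloud broker
BROKER_PORT = 8883                   # conventional MQTT-over-TLS port

ctx = ssl.create_default_context()   # verifies the broker's certificate
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

# With paho-mqtt this would typically continue as:
#   client.tls_set_context(ctx)
#   client.connect(BROKER_HOST, BROKER_PORT)
print(BROKER_PORT)
```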

Sharing a collection of links is probably off topic, but http://mqtt.org is a good starting point.

\n" }, { "Id": "3883", "CreationDate": "2019-02-12T07:36:32.887", "Body": "

I'm an American who lives in Japan. I can speak Japanese reasonably well, but I normally keep my Google Home speakers and Google Assistant app set to U.S. English for ease of use. Now I'm thinking of buying Google-compatible smart ceiling lights for my livingroom from a Japanese manufacturer. (In Japanese homes, most rooms have a universal attachment in the ceiling that accepts any style of ceiling light - the U.S. needs to learn that lesson!) But of course the example voice commands are all in Japanese - I'm sure they are assuming all their users have their smart speakers set to Japanese. I asked a light manufacturer about it, but they merely answered that their lights are only for use in Japan and therefore they have not tested them with English and cannot advise me.

\n\n

So my question (thinking like a programmer) is: When a voice command is given to Google Home to control another device (e.g. \"Dim the lights in the living room\"), I know Google Assistant will first parse the sentence to determine which device(s), e.g. the room called \"livingroom\" and device type called \"lights\". But then, does it send the rest (in this case, the word \"dim\" in English) to the device as-is, or is there an underlying code that is not English or Japanese at all but part of a common command set for smart lights? I know that I might have to experiment to figure out what English syntax works with any given model (particularly fancier stuff like changing the color temperature or setting the brightness to a specific level), but I just want to know if it's possible.

\n\n

This is actually the second IoT device I've bought in Japan, but the first was only a cheap plug (so the only commands are on and off) and all its communication was through Smart Life (almost certainly a server in China, and it supports multiple languages), so that's not a very good test.

\n", "Title": "Are underlying commands language-agnostic between smart speakers and IoT devices?", "Tags": "|google-assistant|smart-lights|", "Answer": "

The commands, e.g. on/off and set brightness, are all handled in the language the device is set to. They are then converted to a set of enum values that are passed to the backend for control as part of a JSON object. See the Google Home Smart Device API for a list of available commands and what gets passed: https://developers.google.com/actions/smarthome/

\n\n
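To illustrate: whatever language the phrase is spoken in, what reaches the backend is a language-neutral command name plus parameters. This sketch mirrors the shape of the commands section of a Smart Home EXECUTE payload; the device id and brightness value are made up.

```python
# Illustration: 'dim the lights' ends up as a language-neutral enum-style
# command name plus JSON params. Device id is a made-up example.
execute_request = {
    'commands': [{
        'devices': [{'id': 'light-livingroom-1'}],     # matched by name first
        'execution': [{
            'command': 'action.devices.commands.BrightnessAbsolute',
            'params': {'brightness': 30},              # 'dim' resolved to a level
        }],
    }]
}

cmd = execute_request['commands'][0]['execution'][0]
print(cmd['command'])   # no English or Japanese left at this layer
```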

As for device/room names, they are matched against the language they are entered in. So if you name a room Living Room in English and then try to access it with the Japanese for Living Room \"\u30ea\u30d3\u30f3\u30b0\u30eb\u30fc\u30e0\", I don't think that will work. Once a device is matched by name, it is then converted to a unique ID provided by the hardware manufacturer.

\n" }, { "Id": "3891", "CreationDate": "2019-02-13T18:43:23.027", "Body": "

Amongst the plethora of MQTT questions, I am wondering what are some alternatives to MQTT for when all messages sent to a topic need to be kept, and in a queue for a new subscriber.

\n\n

At my company, we have remote deployments that we manage, and we are wanting to use MQTT for local data collection. The idea would be that data would be sent to the local broker onsite (running on a Raspberry Pi, for example), and the broker would have an MQTT bridge with our CloudMQTT deployment. If connectivity would be lost, the messages would collect locally, and synchronize again when connectivity was re-established.

\n\n

The set up is typical, like this:

\n\n

\"Simple

\n\n

For my example, on the left side would be many (around ~100) MQTT local brokers running at each location, and on the right would be the CloudMQTT server we pay for.

\n\n

When I read the article MQTT Essentials Part 8: Retained Messages, this part was disappointing:

\n\n
\n

A retained message is a normal MQTT message with the retained flag set\n to true. The broker stores the last retained message and the\n corresponding QoS for that topic. Each client that subscribes to a\n topic pattern that matches the topic of the retained message receives\n the retained message immediately after they subscribe. The broker\n stores only one retained message per topic.

\n
\n\n

Essentially what this means is that there would have to constantly be a subscriber on the CloudMQTT server listening for all incoming events from all of our locations; otherwise, data might be lost.

\n\n

MQTT seems built to only keep the most recent message; are there any other software packages that can do this local <=> remote syncing, but keep all messages?

\n", "Title": "Alternatives to MQTT for local / remote bridging", "Tags": "|mqtt|", "Answer": "

Another option is to create your own service to handle it. The service needs to be connected to the local MQTT broker on your local RPi (which acts as a local gateway), as you have specified. A local gateway is generally used to provide data filtering, but you can also use it for temporary storage of the data your devices send.

\n\n

You can use Python, or any programming language with an MQTT client library. The client service is just another MQTT client running on the gateway, and it can perform logic and database operations. As soon as data arrives at the local broker, this service receives it and tries to forward it to CloudMQTT. If it cannot connect to CloudMQTT because no internet connectivity is available, it stores the data in a temporary database with a flag that tracks whether that data has been published to CloudMQTT or not.

\n\n

The client service (always running in the background) then tries to fetch this data from the database and resend it. If the data is successfully sent to CloudMQTT, the service updates the flag to mark it as sent. Later on you can delete all the data from the database whose flag indicates it has been sent to CloudMQTT.

\n\n

So in this case your local RPi, which runs a local broker (along with a database like SQLite), can actually act as an intelligent gateway. This solution should give you the ability to deliver all collected data even if there is no internet connectivity for hours, a day, or more, because you have all of it backed up.
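A sketch of that forwarding logic, with an in-memory queue standing in for the flagged database table and an injected publish function so it runs without a broker; with paho-mqtt you would wrap client.publish() so it returns a success flag.

```python
# Sketch: gateway-side store-and-forward. Local messages queue up during
# outages and are drained to the cloud broker once publishing succeeds.
from collections import deque

class ForwardingService:
    def __init__(self, publish):
        self.publish = publish      # callable(topic, payload) -> bool
        self.backlog = deque()      # unsent messages survive outages here

    def on_local_message(self, topic, payload):
        self.backlog.append((topic, payload))
        self.drain()

    def drain(self):
        while self.backlog:
            topic, payload = self.backlog[0]
            if not self.publish(topic, payload):
                break               # still offline; keep for next attempt
            self.backlog.popleft()  # delivered, safe to drop

online = False
svc = ForwardingService(lambda t, p: online)
svc.on_local_message('site1/temp', '21.5')   # offline: stays queued
online = True
svc.on_local_message('site1/temp', '21.7')   # back online: both delivered
print(len(svc.backlog))                      # 0
```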

\n" }, { "Id": "3900", "CreationDate": "2019-02-15T21:46:08.863", "Body": "

I have some Amazon Dash buttons that I want to use for several different scenarios (I don't know quite what yet), but I am unable to find a good way to make the indicator light turn green after I have pushed one.

\n\n

I am aware that Amazon has a more expensive version of the button that can do this, I am going to buy one to play with, but I want to get this working if possible.

\n\n

I found this conversation and looked at the links it references; just like the comment says, it is not a very good way to do things.

\n\n

I have some c# running as a service on my computer and all it is doing, for now, is looking for MAC address then logging when that happened.

\n\n

I don't know much about web request stuff, but I am willing to really dig into it, as long as I don't have to do something like put custom firmware on my router.

\n\n

I am using PacketDotNet and SharpPcap in my c# solution.

\n\n

How do I return the 200 response, or anything else, to the Amazon Dash button to make the indicator turn green?

\n\n

Any help is greatly appreciated.

\n", "Title": "Dash Button indicator Light", "Tags": "|aws-iot|aws|amazon-iot-button|", "Answer": "

I'll start by saying I've not looked at one of these in person yet,

\n\n

But intercepting the call to the Amazon backend and responding to trigger the light is going to be tricky.

\n\n

I fully expect these devices to be hitting HTTPS endpoints (I really hope they are) in AWS which means that even if you set up a firewall rule to redirect the request to something local it would need to respond with a matching TLS certificate.

\n\n

You'll need to use something like Wireshark to capture a button push to double-check.

\n" }, { "Id": "3937", "CreationDate": "2019-02-27T18:00:30.060", "Body": "

I am just a high school student who is trying to self-learn for my IoT home automation projects. Please excuse me if this question sounds silly to you. This time I am working on a \"clap to switch on the light\" project.\nI learnt that to detect the sound of the clap, I can use a sound sensor. In my case, the sensor will be connected to an Arduino UNO, which will then be connected to a relay module that controls the lights. My question is: will the light switch on because of a loud sound other than a clap? Obviously it would be really annoying for the light to switch on and off when I don't want it to.

\n", "Title": "For my \"switch on light with clap project \" will sound sensors detect other sounds other than a clap and switch on the light?", "Tags": "|sensors|arduino|sound|", "Answer": "

Of course it will detect other sounds in the sense of noticing other sounds \u2014 that's what a sound sensor is for. Getting the trigger exact enough will probably be the most challenging part of the project \u2014 unless there is a library that detects the sound profile of claps.

\n\n

That shouldn't deter you though. It's just about refining the detection bit by bit. Just know that you'll have misfires in the detection in the beginning. A lot.

\n\n
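One simple refinement to sketch the idea: require a double clap, i.e. two threshold crossings a plausible distance apart, before toggling the relay. The timing thresholds here are assumptions to tune against your sensor.

```python
# Sketch: reject random loud noises by requiring two peaks roughly
# 0.2-0.6 s apart. The gap limits are assumptions to tune by experiment.
def is_double_clap(peak_times, min_gap=0.2, max_gap=0.6):
    '''peak_times: timestamps (seconds) at which the sensor crossed the
    loudness threshold. True if the last two peaks look like a double clap.'''
    if len(peak_times) < 2:
        return False
    gap = peak_times[-1] - peak_times[-2]
    return min_gap <= gap <= max_gap

print(is_double_clap([10.00, 10.35]))   # True: plausible double clap
print(is_double_clap([10.00, 12.00]))   # False: two unrelated noises
```

The same logic ports directly to Arduino C++ using millis() timestamps.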

The first Amazon Echos were quite bad at properly detecting the wake word. Nowadays they can detect it from a few rooms away, while the tap is running, the TV is on, and the police drive by with sirens.

\n" }, { "Id": "3939", "CreationDate": "2019-02-27T19:57:56.617", "Body": "

In the LoRaWAN 1.1. standard it says on Page 16 \"For Join-Accept frame, the MIC field is encrypted with the payload and is not a separate field\".

\n\n

In other scenarios the message integrity code is a separate field, so the message is encrypted before the MIC is generated, if I understood this correctly. It is considered more secure to use Encrypt-then-MAC.

\n\n

Do you have an idea why MAC-then-encrypt is used for this message type?

\n", "Title": "Why does LoRaWAN 1.1 use MAC-then-encrypt for Join-Accept messages?", "Tags": "|security|lorawan|standards|cryptography|", "Answer": "

In the case of all LoRaWAN messages except JoinAccept, the MIC must be accessible to interim network components, like a forwarding network server, which knows the NwkSKey, can verify message integrity, and can drop the message if verification fails.

\n

However, interim network elements should have no information about the integrity of a JoinAccept message. This is to increase the level of security.

\n

If someone knew the MIC of a JoinRequest, then they could guess an AppKey/JoinNonce and check programmatically whether the guess is correct. If that someone invests in a large computer farm and observes the network for a long time, they may find out what the AppKey is. That is not possible if the MIC is not accessible.

\n

You may say that the bad guy can do the same for regular uplink messages. That is true, but please note that UL messages are signed and encrypted with session keys and not with the master key.

\n" }, { "Id": "3941", "CreationDate": "2019-02-28T06:39:42.953", "Body": "

Basically I want to make my Google Home device make an announcement when my mail arrives, via a Domoticz sensor.

\n\n

I know IFTTT can trigger Domoticz via web URL but is there a function to trigger Google Assistant in a similar fashion?

\n", "Title": "Is it possible to trigger Google Assistant to say things via 3rd party?", "Tags": "|google-home|google-assistant|domoticz|", "Answer": "

You can send commands to a Google Home device on the same network to play an MP3 from a URL. With this you can have the Home device play arbitrary messages.

\n\n

There are libraries to do this, e.g. for Node.js there is google-home-notify, which takes a string, sends it to Google's Text-to-Speech API and then has the Google Home play the output.

\n\n

The example code for this node is very simple:

\n\n
var googlehomenotifier = require('../')(\"192.168.178.131\", \"en-US\", 1);\n\ngooglehomenotifier.notify(\"Some crazy textmessage\", function (result) {\n  console.log(result);\n})\n
\n\n

Where 192.168.178.131 is the IP address of the Google Home device on your local network.

\n\n

There is also a Python version that might plug into Domoticz easier.

\n" }, { "Id": "3943", "CreationDate": "2019-02-28T15:41:06.293", "Body": "

I am working on a project that largely revolves around the ESP32 WROVER. The device is going to be at a remote location for testing, but I'll need to be able to get the serial output.\nI think that the OpenLog might be a good solution here, but I'm not 100% positive. I've been reading the following, and have come away with more questions (mostly because it is rather Arduino-specific): https://learn.sparkfun.com/tutorials/openlog-hookup-guide

\n\n

My board has 4 pins broken out from the ESP32: rx, tx, vcc, gnd. Will those be sufficient to write to the OpenLog? Or do I need to figure out how to get a wire for dtr and blk (gnd?)

\n\n

Do I need to somehow include the OpenLog library in my source for the ESP32 program? It is written in C++ using PlatformIO, not Arduino. I am hoping that by using a config.txt file I can just have it write all serial input it receives to a single file. Is that the case?

\n\n

Thanks for any guidance here!

\n", "Title": "Can I log serial output from ESP32 using Sparkfun's OpenLog?", "Tags": "|esp32|", "Answer": "

Just for reference, it worked fine almost out of the box. I only needed to connect rx, tx, vcc and gnd. Then I set up the config file as outlined and plugged it in. Every time I run the ESP32, all of the output is saved to a new file. Couldn't have been simpler.

\n" }, { "Id": "3946", "CreationDate": "2019-02-28T21:06:12.100", "Body": "

TLDR:
\nI want to query Alexa for the current humidity sensor reading by asking:

\n\n
\n

\"What is the current humidity, Alexa?\"

\n
\n\n

The answer should be similar to:

\n\n
\n

\"The current humidity level is 56.7%\"

\n
\n\n

Here is my setup:

\n\n\n\n

What I am looking for:

\n\n\n\n

Is there any way to achieve what I intend to do, e.g. by using an already existing skill or maybe a NodeRed extension?

\n", "Title": "Custom sensor readings with Amazon Alexa", "Tags": "|smart-home|mqtt|alexa|sensors|node-red|", "Answer": "

Finally, I was able to achieve my goal of querying Alexa for custom sensor readings without needing to program my own skill.

\n\n
\n\n

The following elements are used in my setup:

\n\n
    \n
  1. A Raspberry Pi in my home LAN with Node-RED installed on it
  2. \n
  3. Some source for a sensor signal (in my case a Nodemcu with a DHT22 sensor that sends the humidity readings via MQTT to the Raspberry Pi, where the MQTT broker is running)
  4. \n
  5. An Amazon Echo (which does not need to be in the same LAN as the Raspberry Pi!)
  6. \n
  7. The Node-RED Alexa Home Skill Bridge node by @hardillb
  8. \n
  9. The Alexa-remote-control shell script that lets you issue any text-to-speech command to your Alexa devices
  10. \n
\n\n
\n\n

Here are the steps that one needs to take:

\n\n
    \n
  1. Register a new device in @hardillb\u2019s Node-RED Alexa Home Skill Bridge. Any device and name combination should do. I chose a smart plug and called it \u201cHumidity_at_home\u201d.
  2. \n
  3. Now let Alexa search for new devices.
  4. \n
  5. Create an Alexa routine in the Alexa app, where you use a custom voice trigger (in my case: \u201cAlexa, what is the current humidity level\u201d) to switch the virtual device \u201cHumidity_at_home\u201d on.
  6. \n
  7. In Node-RED configure a Alexa Home Skill Bridge node for the device \u201cHumidity_at_home\u201d. Depending on the command from Alexa (\u201cHumidity_at_home\u201d plug on/off) the msg.command element of the node output will have the value TurnOnRequest/TurnOffRequest.
  8. \n
  9. In node-red, when the \u201cHumidity_at_home\u201d node is triggered and outputs msg.command = \"TurnOnRequest\", call the Alexa-remote-control shell script via the exec node issuing a text-to-speech command to an Echo device, e.g. using this command:

    \n\n
    alexa_remote_control.sh -d \"Your Echo's name\" -e speak:'Here is the text string you construct as answer to your Alexa request for the humidity level value'\n
  10. \n
\n\n
\n\n

Needless to say, you can use any kind of virtual device and any kind of device setting to trigger actions in Node-RED.

\n" }, { "Id": "3955", "CreationDate": "2019-03-04T19:40:12.417", "Body": "

I upgraded my v1 SmartThings Hub to the v2 SmartThings Hub. According to some online forums, I made sure to properly reset and remove all of my devices from my previous hub. Next, I installed my new hub (v2) and removed my old hub (v1) via the online portal (Groovy IDE or something...). Finally I began adding my devices, most were from the previous hub, a few were brand new that I bought with the v2 hub.

\n\n

None of the \"Samsung SmartThings\" devices connected. I've realized that all of the devices that have successfully connected to the v2 hub are Z-Wave, so I can assume that there is an issue with the Zigbee devices or the Zigbee radio in the v2 hub. Since half of them connected on the old hub, I'm assuming it's with the hub's radio.

\n", "Title": "SmartThings Zigbee Devices won't connect", "Tags": "|zigbee|samsung-smartthings|", "Answer": "

After close to three months of communicating with Samsung support, I finally received some steps that solved my issues. I'll step through what they had me do.

\n\n

Reset Your Hub

\n\n
    \n
  1. Unplug your hub from DC power
  2. \n
  3. Remove all backup batteries from your hub
  4. \n
  5. Wait about 15-30 mins for all capacitance to dissipate
  6. \n
  7. Plug your hub back into DC power
  8. \n
  9. Wait for the LED light to turn solid green
  10. \n
  11. Try to add your device(s)
  12. \n
\n\n

Reset Your Device

\n\n

There are a number of guides that Samsung provides for each device on how to properly reset your device. The latest SmartThings app provides a link that takes you directly to the appropriate page when it appears to be taking too long to connect.

\n\n

For the most part, it consists of pressing and holding an available connection button for 5 seconds until the LED begins blinking green or blue. An alternative, typically for older models, is to remove all power from the device (battery, plug, etc.), then press and hold the connection button while re-introducing power to the device. Continue holding the connection button until the LED begins flashing red or yellow, at which point you should release the button.

\n\n

Verify Your Radio

\n\n
    \n
  1. Log into your SmartThings Groovy IDE using either your SmartThings or Samsung account (whatever you log into your app with).
  2. \n
  3. Go to My Hubs
  4. \n
  5. Look at the row for ZigBee.

    \n\n

    a. Verify the State: Functional

    \n\n

    b. Verify the OTA: enabled for all devices

  6. \n
\n\n

For me, OTA was disabled. If this is the case, scroll down to Utilities, click on View Utilities, and follow the next few steps:

\n\n
    \n
  1. Look at the row for ZigBee Utilities
  2. \n
  3. If you have no utilities, perform the Soft Reset on Your Hub
  4. \n
  5. Click on the Allow OTA tool
  6. \n
  7. Go back to My Hubs
  8. \n
  9. Verify the OTA setting is now set to enabled for all devices*
  10. \n
\n\n

Soft Reset on Your Hub

\n\n

When verifying the configuration of my ZigBee radio, I did not have any utility options for my device. So, I was instructed to perform what they called a Soft Reset of my hub.

\n\n
    \n
  1. Unplug all power (DC and battery) from the hub
  2. \n
  3. Using a small tool or paperclip, press and hold the reset button in the back
  4. \n
  5. While continuing to hold the reset button, plug the DC power back into the hub.
  6. \n
  7. When the LED begins flashing yellow, release the reset button. For me, the light almost immediately began flashing as soon as I plugged in the DC power. Also, the flashing wasn't a normal flash; it looked like a glitchy \"my battery is about to die\" kind of flash. For those that remember how the LED indicator light would flash on the Gameboy, that's what it's like.
  8. \n
  9. The hub will go through a series of updates, transitioning the LED light between blue, magenta (or pink), and off.
  10. \n
  11. When the hub is ready, it will return to a solid green.
  12. \n
  13. After the hub is ready, return to the online portal and run through the steps for verifying and setting the OTA configuration to enabled for all devices.
  14. \n
\n" }, { "Id": "3966", "CreationDate": "2019-03-08T11:15:43.230", "Body": "

One of the supposed advantages of Zigbee application profiles such as Home Automation is that devices from different manufacturers can work together, so in theory you can use one generic hub (e.g. SmartThings) instead of several proprietary ones.

\n\n

But what happens when a device wants to check for updated firmware over the internet?\nDoes Zigbee or the application profile have some kind of standard for this?

\n", "Title": "Can a Zigbee device receive an OTA update through a different manufacturers hub?", "Tags": "|zigbee|", "Answer": "

Yes, the ZigBee standard defines the protocol for the OTA transfer, and also the format of the OTA files. As long as manufacturers implement the standard, it is possible to use any hub to load new firmware into any device.

\n\n
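To make that concrete, here is a minimal sketch parsing the fixed portion of an OTA upgrade file header. The field layout is assumed from the ZCL OTA Upgrade cluster specification (all fields little-endian; the magic number 0x0BEEF11E identifies an OTA upgrade file) — treat it as an illustration, not a complete implementation:

```python
import struct

# Fixed portion of a ZigBee OTA upgrade file header (layout assumed from
# the ZCL OTA Upgrade cluster spec; all fields little-endian).
HEADER_FMT = "<IHHHHHIH32sI"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 56 bytes
OTA_FILE_ID = 0x0BEEF11E  # OTA upgrade file identifier

def parse_ota_header(data: bytes) -> dict:
    (file_id, header_version, header_length, field_control,
     manufacturer_code, image_type, file_version,
     stack_version, header_string, total_size) = struct.unpack(
        HEADER_FMT, data[:HEADER_SIZE])
    if file_id != OTA_FILE_ID:
        raise ValueError("not a ZigBee OTA upgrade file")
    return {
        "manufacturer_code": manufacturer_code,
        "image_type": image_type,
        "file_version": file_version,
        "header_string": header_string.rstrip(b"\x00").decode("ascii", "replace"),
        "total_size": total_size,
    }
```

A hub that follows the spec can use the manufacturer code and image type from this header to decide which joined devices the image applies to.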

You can find these definitions in the ZigBee Cluster Library documentation.

\n" }, { "Id": "3987", "CreationDate": "2019-03-16T21:09:24.707", "Body": "

I thought of a Bluetooth strip device that would tell me the value of the angle and direction in 3D. I tried to search for one, but did not find anything. \nThe only idea I've got so far is to set up a few gyroscopes in a straight line and constantly measure their position, and thus calculate the bending value. Is there a device that can do that?

\n", "Title": "Measure strip bending", "Tags": "|bluetooth|", "Answer": "

This sounds like you need to use strain gauges and build your own.

\n\n

While Bluetooth-enabled strain gauges do exist, they are mostly used as power meters for bicycles, measuring the amount of bend in the cranks. But the cranks only bend by imperceptible amounts.

\n\n

The other possible option would be a piezoelectric material on the surface of the piece, but again you'd have to build your own circuit to interpret the voltage change and calibrate it.

\n\n

There isn't enough information about the setting to say if an accelerometer/gyroscope approach would work. We don't know if the entire artefact is fixed (e.g. bolted down) or can move without bending the strip.

\n" }, { "Id": "3988", "CreationDate": "2019-03-17T15:14:53.113", "Body": "

I would like to design a lock which will automatically open once someone is on a specific side of a door (\"outside\").

\n\n

There are easy (software) and complicated (hardware) elements to take into account but one of the points which i do not know to approach is the side detection.

\n\n

How is this typically done?

\n\n

An example is my car which will let the door close only if the key is outside the car. If it is next to the car outside, the car will lock. A few centimeters in (the key touches the door from the inside) and it is not possible to lock the door anymore.

\n", "Title": "How to differentiate the side of a door?", "Tags": "|wireless|door-sensor|smart-lock|", "Answer": "

Jsotola is right. Cars use the distance to the key to decide when they can lock the doors.

\n\n
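One common way the "distance" part is done in practice is received signal strength: compare the key's RSSI at an antenna inside the door with one outside. A sketch using the standard log-distance path-loss model follows — the calibration constants (`rssi_at_1m`, `path_loss_n`) are hypothetical and would need measuring for a real installation:

```python
def estimate_distance(rssi_dbm: float,
                      rssi_at_1m: float = -59.0,   # calibrated RSSI at 1 m (assumed)
                      path_loss_n: float = 2.0) -> float:
    """Log-distance path-loss model: rough distance in metres from an RSSI reading."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_n))

def key_is_inside(rssi_inner: float, rssi_outer: float) -> bool:
    """Compare readings from an inner and an outer antenna: the stronger
    (less negative) reading indicates which side of the door the key is on."""
    return rssi_inner > rssi_outer
```

RSSI is noisy, so a real system would average several readings per antenna before deciding.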

In your case, you can use an extra sensor on the inside: for example, an RFID or NFC tag, an ultrasonic sensor, or a laser or infrared sensor. With this element, you can detect whether people are on the inside.

\n" }, { "Id": "4001", "CreationDate": "2019-03-22T10:08:22.507", "Body": "

On Page 9 in the LoRaWAN 1.1. specification it says:

\n
\n

The over-the-air octet order for all multi-octet fields is little endian.

\n

EUI are 8 bytes multi-octet fields and are transmitted as little endian.

\n
\n

How are multi-octet fields defined in the standard?

\n

Are these fields seen on the highest level of detail on page 16 (e.g. for payload-messages: MHDR, DevAddr, FCtrl, FCnt, FOpts, FPort, FRMPayload, MIC)?

\n

And what is meant by "over-the-air"?

\n

On page 26 the MIC calculation is described as calculating the CMAC of the message and some extra information and then only taking the first four bytes of it. Does this mean, the CMAC is seen as big endian first, then the MSB are "turned around" or is it more like the CMAC is already interpreted as little endian and the LSB is inserted and sent without changing it?

\n", "Title": "How does endianness work in LoRaWAN1.1?", "Tags": "|lora|lorawan|standards|", "Answer": "

I had a look into the source code of some projects, and it looks like all multi-octet fields that are encrypted are left big-endian. The rest are little-endian, and are even used as little-endian during calculations such as the MIC, or as a nonce in encryption. During MIC calculation, the internal 4-byte counter is used as the FCnt value, still as little-endian.

\n\n
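A small sketch of what those conventions mean in practice (illustrative values; this just encodes the byte-ordering rules described above, it is not a LoRaWAN implementation):

```python
import struct

# An EUI is written MSB-first in documentation and consoles, but as a
# multi-octet field it goes over the air little-endian, i.e. byte-reversed.
def eui_to_air(eui_hex: str) -> bytes:
    return bytes.fromhex(eui_hex)[::-1]

# FCnt is a 16-bit field in the frame, but the MIC is computed over the
# full 32-bit counter, still encoded little-endian.
def fcnt_for_mic(fcnt: int) -> bytes:
    return struct.pack("<I", fcnt)

# The MIC itself is the first 4 bytes of the AES-CMAC output, taken
# as-is: no byte swapping of the CMAC result.
def truncate_cmac(cmac: bytes) -> bytes:
    return cmac[:4]
```

So the answer to the CMAC question is the simpler reading: the first four CMAC bytes are inserted and sent without being turned around.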

I made a table for this:\n\"table

\n" }, { "Id": "4004", "CreationDate": "2019-03-23T01:10:45.620", "Body": "

Is there a GET request rate limit for the Thingspeak API? So far I've only been able to find information about POST requests.

\n\n

I have a data acquisition R script that I would like to run in parallel. I'm pausing ~1-second between each request currently, but when parallelized, I would be making 8 requests every 1-second instead.

\n\n

Thanks!

\n", "Title": "Thingspeak API Limit GET Requests?", "Tags": "|rest-api|", "Answer": "

A roundabout answer.

\n

As per the info at the bottom of this answer, you are charged by the number of messages written, not by the number of times someone reads the data.

\n

That means you can experiment with multiple GET requests without having to worry about a cost, so run your script and see what happens.

\n

NOTE: make sure that your script does not write any messages.

\n

Here is an excerpt from https://thingspeak.com/pages/license_faq

\n
\n

4. What is a message?

\n

ThingSpeak stores messages in channels. A message is defined as a write of up to 8 fields of data to a\nThingSpeak channel. For example, a channel representing a weather\nstation could include the following 8 fields of data: temperature,\nhumidity, barometric pressure, wind speed, wind direction, rainfall,\nbattery level, and light level. Each message cannot exceed 3000 bytes.\nExamples of messages include:

\n
    \n
  1. A write to a ThingSpeak channel using the REST API or target-specific ThingSpeak libraries
    \na. From an embedded device
    \nb. From another computer
  2. \n
  3. A write to a ThingSpeak channel using MQTT
  4. \n
  5. A write to a ThingSpeak channel from MATLAB using thingspeakwrite or the REST API
  6. \n
  7. A write to a ThingSpeak channel inside ThingSpeak using the MATLAB Analysis or MATLAB Visualizations Apps
  8. \n
  9. Any writes to ThingSpeak triggered by a React or a Timecontrol
  10. \n
\n
\n

and

\n
\n

19. Does using any of the apps in ThingSpeak\u2122 affect my messages in any way?

\n

Your messages are consumed when you write data to a ThingSpeak\nchannel. If you write data to a channel from one of the ThingSpeak\nApps, you will consume messages. For example, if are using the MATLAB\nAnalysis app to compute a value that is derived from data you have\nstored in ThingSpeak channels, you will not consume messages, but if\nyou save/write that value to another channel, you will consume\nmessages.

\n
\n" }, { "Id": "4010", "CreationDate": "2019-03-25T09:53:02.653", "Body": "

I currently have a Mosquitto MQTT Broker on which some IoT Nodes publish their information on a specific topic. I have an instance of Telegraf from InfluxData running that subscribes to this topic and stores the information into InfluxDB.

\n\n

Requirements:

\n\n

I understand that information published on a topic will be sent out immediately to subscribers by the broker. I am looking for a retention or caching mechanism in the MQTT broker which can wait until X data points have been published on the topic and then send them to the subscriber.

\n\n

Does such a mechanism exist in standard MQTT brokers, or does this go beyond the MQTT standard?

\n", "Title": "Are MQTT Brokers able to retain/cache some data for a certain amount of time and then send to the subscribers?", "Tags": "|mqtt|publish-subscriber|", "Answer": "

No, this is not possible with an MQTT broker.

\n

There are 2 situations where a broker will cache a message on a given topic.

\n
    \n
  1. When a message is published with the retained bit set. This message will be stored and delivered to any client that subscribed to a matching topic before any new messages. This is a single message per topic and a new message with the retained bit will replace any current message.

    \n
  2. \n
  3. If a client has a persistent subscription and is offline, the broker will queue all messages sent while it is offline and deliver them when it reconnects (unless it sets the cleanSession bit to true in its connection packet)

    \n
  4. \n
\n

The only way to achieve what you describe is to have another client batch up the messages and publish the collection on a different topic.

\n
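A sketch of such a batching client's core logic (pure Python, framework-agnostic; hook `add` into e.g. a Paho `on_message` callback and pass a function that publishes to the batch topic — the topic names and batch size here are hypothetical):

```python
import json

class MessageBatcher:
    """Collects individual MQTT payloads and republishes them in batches.

    Wire `add` into your MQTT client's on_message callback and pass a
    publish function, e.g.:
        lambda payload: client.publish("sensors/batched", payload)
    """
    def __init__(self, batch_size, publish):
        self.batch_size = batch_size
        self.publish = publish
        self.buffer = []

    def add(self, payload):
        self.buffer.append(payload)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Publish the whole collection as one JSON array, then reset.
        if self.buffer:
            self.publish(json.dumps(self.buffer))
            self.buffer = []
```

In Telegraf you would then subscribe to the batch topic and parse the JSON array instead of individual messages.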

EDIT:

\n

The MQTTv5 spec supports setting a Message Expiry Interval value on a message.

\n
\n

If present, the Four Byte value is the lifetime of the Application\nMessage in seconds. If the Message Expiry Interval has passed and the\nServer has not managed to start onward delivery to a matching\nsubscriber, then it MUST delete the copy of the message for that\nsubscriber [MQTT-3.3.2-5].

\n

If absent, the Application Message does not expire.

\n

The PUBLISH packet sent to a Client by the Server MUST contain a\nMessage Expiry Interval set to the received value minus the time that\nthe Application Message has been waiting in the Server [MQTT-3.3.2-6].\nRefer to section 4.1 for details and limitations of stored state.

\n
\n" }, { "Id": "4016", "CreationDate": "2019-03-26T17:54:58.957", "Body": "

I have two Raspberry Pis with a LoRa module each (Microchip RN2483, connected over serial). How do I tell MQTT (in Python) to use the LoRa motes (/dev/ttyAMC0) instead of Ethernet or Wi-Fi?

\n", "Title": "Communicate with MQTT over LoRa", "Tags": "|mqtt|raspberry-pi|lora|", "Answer": "

The short answer to this is you don't with any of the standard libraries (especially not Paho or the old Mosquitto Python wrapper).

\n\n

While MQTT doesn't strictly require TCP, it is best suited to being implemented on top of it, and trying to use it over a serial port routed to a LoRa radio will not be simple. It would require removing all the socket-level code and replacing it with LoRa-specific code, plus a LoRa addressing scheme to identify clients and the broker.

\n\n

I suggest you look at the following 2 things that may suit your needs.

\n\n

Firstly, look at the MQTT-SN spec; this is an even lighter-weight protocol that is better suited to serial-like communication.

\n\n

Secondly, what might be easier is to look at how the Things Network works. This uses MQTT to pass messages to the correct LoRa gateway, which then delivers the message to the correct client.

\n" }, { "Id": "4030", "CreationDate": "2019-04-04T07:53:50.873", "Body": "

I bought a smart plug, which doesn't seem to support measuring electricity usage with its native app.

\n\n

I am wondering what the chance is that this is a pure software feature, and whether I would be able to measure electricity usage simply by using a different app.

\n\n

If yes, what app could I try?

\n", "Title": "Measuring electricity usage with TECKIN smart plug", "Tags": "|smart-plugs|", "Answer": "

According to this Amazon page, the Teckin SP20 supports it.

\n\n

Also according to that Amazon page, the item does not have a UL listing. They claim to be applying for an ETL listing, which would be equivalent. In the meantime

\n\n
\n

It is total safe,you do not have to worry at all.

\n
\n" }, { "Id": "4037", "CreationDate": "2019-04-04T16:03:03.037", "Body": "

I have followed the tutorial for the AWS integration but cannot get the MQTT messages to display in the AWS IoT Core. I assume bridging is not working since I can get the messages in an external client from the TTN MQTT broker.

\n\n

I have followed the troubleshooting steps but there is nothing at all in the logs (neither in app-1.log nor app-1.error.log). I then used ssh to get into the EC2 instance and tried to manually launch the integration with

\n\n
/var/app/current/bin/integration-aws run\n
\n\n

It then crashed with the following error

\n\n
  INFO Initializing AWS IoT client              PolicyName=ttn-integration Region=\n FATAL Failed to get AWS IoT endpoint\n
\n\n

When setting the AWS_REGION=eu-west-1 environment variable, I get the following error:

\n\n
  INFO Initializing AWS IoT client              PolicyName=ttn-integration Region=eu-west-1\n  INFO Found AWS IoT endpoint                   Endpoint=an3cfmjmy6od4.iot.eu-west-1.amazonaws.com\n  INFO Created certificate                      ID=189326ec8e90c8abd0de2eef1fe6eb4a22c05495c49bcfe291860a6b45243acd\n FATAL Failed to attach policy to certificate   Certificate=189326ec8e90c8abd0de2eef1fe6eb4a22c05495c49bcfe291860a6b45243acd Policy=ttn-integration error=ResourceNotFoundException: Policy not found\n        status code: 404, request id: ff8df0e8-56f1-11e9-9368-e107e49fac68\n
\n\n

I assume there is some sort of configuration file missing but I cannot seem to find anything that mentions such a file.

\n\n\n", "Title": "The Things Network AWS IoT integration", "Tags": "|mqtt|aws-iot|lora|lorawan|", "Answer": "

First of all, trying to launch the integration from an SSH session is a bad idea (it is missing environment variables among other things).

\n\n
\n\n

With the old version of the integration (before 2.0.11), we somehow needed certificates with an associated policy for the things in the AWS IoT core. This bug is fixed in 2.0.11 (which is the current one at the time of writing this answer).

\n" }, { "Id": "4040", "CreationDate": "2019-04-04T16:48:01.723", "Body": "

I'm using an IP camera as an input for my IoT project.\nThe IP cam's default IP is 192.168.1.10, and in order to use it in my network I need to change its default IP.

\n\n

My network address is 192.168.3.x. The seller's instructions are clear and reasonable, but my network address is not the same as his, causing IP SEARCH not to find this device.

\n\n

I know that it is connected correctly, since my NVR detects the camera with the exact IP, but changing the IP cannot be done using the NVR GUI...

\n\n

Any ideas ?

\n\n

Guy

\n", "Title": "Change IP address of IP CAM using CMS software", "Tags": "|digital-cameras|surveillance-cameras|ip-address|", "Answer": "

The solution I found was quite quick and useful:

\n\n
    \n
  1. Connect IPCAM directly to PC's LAN port

  2. \n
  3. Change the adapter settings (via Control Panel) to 192.168.1.x

  4. \n
  5. Log off

  6. \n
  7. Connect using the web page with the default IP 192.168.1.10 and change it to the desired IP.

  8. \n
\n\n

This YouTube tutorial helped a lot!

\n" }, { "Id": "4052", "CreationDate": "2019-04-09T21:55:01.143", "Body": "

I would like to connect a dumb lamp to my Hue/Homekit setup.

\n\n

The lamp is powered by G9 LEDs and, as far as I know, there are no G9 smart bulbs.\nI am also unable to exchange the physical wall switches.

\n\n

I tried finding a smart component that could be injected between ceiling outlet and lamp, but couldn't find anything.

\n\n

Does such a component exist?

\n", "Title": "Connect dumb lamp to Hue/Homekit without switch", "Tags": "|philips-hue|apple-homekit|", "Answer": "

There are products like the Sonoff range that, when flashed with the Tasmota firmware, can emulate a Hue Bridge, which might work; but this issue implies native HomeKit support is unlikely.

\n\n

But once you have the controllable device other tools like Home Assistant might allow you to add HomeKit support.

\n" }, { "Id": "4054", "CreationDate": "2019-04-10T08:35:15.043", "Body": "

When one designs software for remote devices and IoT, one has to consider how the system manages various failures, be it software or hardware.

\n\n

If the system recognizes a SW bug, it may notify the cloud and revert to a boot loader.\nIf the system recognizes a HW peripheral issue, it may stop using it and notify the cloud.\nIf the system runs into a fault where it must question its own sanity - let's say, when the NVM is unreliable - it may require a complete shutdown.

\n\n

This is a very big and important issue, on which the rest of the SW should be built.

\n\n

I believe this issue to be common enough for guidelines, tutorials and literature to be written about, so we don't have to reinvent it on our own in each of our individual projects.

\n\n

I would like to know if there is recommended literature, tutorials or guidelines for designing remote device software for robustness, especially regarding image updates.

\n\n

Edit: The focus here is not on error detection, but on how to design a sandbox, in which errors and faults can be treated safely in an IoT device environment.

\n", "Title": "Error Handling Fallbacks In IoT Software", "Tags": "|over-the-air-updates|", "Answer": "

I have a question about FOTA which got no reply. So I researched & posted my own answer, so that you don't have to reinvent the wheel.

\n\n

You can either use RUAC, which looks to be so good that it might be overkill, or you could work your way through (the most recommended) FOTA projects on GitHub.

\n\n

If you don't choose one, there is enough FOSS there that you can read the documentation & code to get a feel for how others do it, and establish your own guidelines.

\n\n

Please, if you find anything better, post it here, to help others. In fact, whatever you choose, please post it here. Thanks

\n" }, { "Id": "4056", "CreationDate": "2019-04-10T17:05:59.847", "Body": "

So I bought some \"Innr\" products, namely zigbee bulbs and plugs. For my hub, I decided to go the raspberry pi + hass.io route.

\n\n

After I bought everything however, I noticed, that there is actually no component on https://www.home-assistant.io/components/#search/ for Innr specifically.

\n\n

Does this mean that Innr products won't work with Home Assistant?

\n\n

Innr is supposedly compatible with the hue bridge. Home Assistant has a component for the hue bridge.

\n\n

Would it work if I use the hue component for that?

\n\n

I kind of thought that most Zigbee products should work with my Zigbee-equipped Raspberry Pi, even if there isn't a specific component. Is this assumption true?

\n", "Title": "If I want to integrate Innr with home assistant, does there need to be a component for it?", "Tags": "|zigbee|philips-hue|home-assistant|", "Answer": "

TL;DR: Innr Zigbee lights work with Home Assistant.

\n

The long version:\nAs with any other Zigbee light, there are two ways to connect it with Home Assistant:

\n\n

When using the second option, make sure to purchase a supported antenna.

\n

In my setup, I had a Hue Bridge gen 1, which I ditched after Philips *decided to reduce its functionality.\nI'm using an Elelabs antenna connected to the Raspberry Pi. The installation was not straightforward but is documented very clearly.\nAll my Zigbee products work very well with it (Hue, Innr, Tradfri, Sonoff).

\n

Note that innr bulbs, at least the model I have, do not support the HA light integration option for transition.

\n

* Paul Hibbert about Philips decision to axe gen 1 (video)

\n" }, { "Id": "4062", "CreationDate": "2019-04-11T19:18:20.290", "Body": "

This may sound dumb, but are flood sensors, like the one from Xiaomi, typically completely waterproof?

\n\n

And if they are, would they work if installed vertically, facing with the side downwards, instead of bottom downwards?

\n\n

What I have in mind is installing one of those in my bathtub and having it notify me when my bathtub is full.

\n\n

Or are there actually better solutions for this?

\n", "Title": "Are immersion sensors typically waterproof?", "Tags": "|sensors|zigbee|xiaomi-mi|", "Answer": "

Of course, it may vary from sensor to sensor, but the Xiaomi Aqara advertised here is rated IP67. If you look up the IP ratings and what they mean, you'll find that the first digit refers to imperviousness to solids, and the second to resistance to water.

\n\n

A rating of 6 on solids means that it has:

\n\n
\n

Protection from contact with harmful dust.

\n
\n\n

A rating of 7 on liquids means that it is:

\n\n
\n

Protected from immersion in water with a depth of up to 1 meter (or 3.3 feet) for up to 30 mins.

\n
\n\n

Given that you're not submerging it to anything like a meter, it should be fine at whatever angle you put it at - especially if you arrange it such that you can remove it from the bathtub once you get there to turn the water off. Leaving it sitting in the water for extended periods of time might not be the best, as it is only rated for up to 30 minutes.

\n\n

Hope this helps!

\n" }, { "Id": "4094", "CreationDate": "2019-04-25T14:15:57.943", "Body": "

I have a multi-tenancy situation where I have created one IoT hub per tenant.

\n\n

Now if I have 5 tenants and I create 5 IoT hubs, should I also create 5 device provisioning services for those 5 IoT hubs (one for each)?\nThat way, when I onboard devices on a large scale, I can programmatically add the provisioning configuration in the app (I will be creating 5 apps for 5 tenants because each tenant's requirements will be different).

\n", "Title": "Should there be one device provisioning service for one IoT hub if it is associated with one tenant?", "Tags": "|security|azure|provisioning|", "Answer": "

It seems a single Device Provisioning Service (DPS) instance is enough to handle this scenario. All I have to consider is the attestation mechanism, in order to identify the devices to DPS. An intermediate certificate of the tenant can be uploaded to the DPS enrollment list, and the same certificate will be used for signing the device certificates which belong to that tenant.

\n\n

So even if the application of each tenant is different, I need to have the common DPS properties encoded in the device during the manufacturing/factory setup process, so that once an internet connection is available it can make its first communication to DPS and get the specific IoT Hub information, which will be available in the enrollment list of the tenant.

\n" }, { "Id": "4099", "CreationDate": "2019-04-29T09:45:43.997", "Body": "

I have a SensorTag CC2650 from Texas Instruments, with their Android app installed on my phone. I am getting an exception while connecting to IBM Watson IoT. It worked fine with the quick-start service, but gives me an exception when I connect it to my registered service on the platform.

\n\n

Working fine with quick-start:

\n\n

\"enter

\n\n

The credentials that I have added in the mobile app are:

\n\n

\"enter

\n\n

The exception that I receive:

\n\n

\"enter

\n\n

On the IBM Watson IoT platform I have created a device; here it is:

\n\n

\"enter

\n\n

And this is screen-shot of IBM IoT Platform Dashboard:

\n\n

\"enter

\n\n

What is the issue? I read this recipe. There was a procedure with screenshots for iOS, but for Android it was written that the author would add Android screenshots soon; he hasn't updated it yet.

\n\n

I also set TLS optional security as mentioned in this post, but the issue still persists.

\n\n

\"enter

\n", "Title": "Cannot connect SensorTag cc2650 to IBM watson IoT platform using their android app", "Tags": "|mqtt|sensors|", "Answer": "

I was able to solve it by following these steps:

\n\n
    \n
  1. I created a new device with type ti-sensortag2 from the IBM Watson IoT dashboard, instead of with type iotsample-ti-cc2650
  2. \n
  3. Passed d:5j6cf4:ti-sensortag2:546c0e5301e1 as the device id.
  4. \n
  5. The last thing I rectified was to include my organization id in the broker id which I passed in the app. So my new broker id came out to be tcp://5j6cf4.messaging.internetofthings.ibmcloud.com
  6. \n
\n\n
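Those values follow the Watson IoT platform's client-id and broker-host formats: `d:<org>:<device type>:<device id>` and `<org>.messaging.internetofthings.ibmcloud.com`. A tiny sketch assembling them from the ids used in the steps above (the helper names are mine):

```python
def watson_client_id(org: str, device_type: str, device_id: str) -> str:
    # "d:" marks a device connection (as opposed to "a:" for an application)
    return f"d:{org}:{device_type}:{device_id}"

def watson_broker_url(org: str) -> str:
    # The organization id is part of the broker hostname
    return f"tcp://{org}.messaging.internetofthings.ibmcloud.com"
```

Getting any one of these three parts wrong (org id, device type, or device id) produces the connection exception shown above.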

The new credentials for Cloud Setup in SensorTag app are shown in this screen-shot:

\n\n

\"enter

\n\n

After that I clicked the push to cloud toggle button and it started sending data to my IoT service:

\n\n

\"enter

\n\n

and I was able to receive data in recent events like this:

\n\n

\"enter

\n" }, { "Id": "4101", "CreationDate": "2019-04-29T13:39:31.763", "Body": "

I have a new Google Home Mini and I'm looking to add a lot of customization to it and the devices connected to it. The built-in "routines" seem quite limited; the main thing I want to do is add delays into the routines, but this is not a built-in function.

\n

I'm looking for a tool that would allow me to add more customization, especially delays. I'm a software developer, so I'm open to programming or messing with config files, but the simplest solution/tool would be best. As far as I can tell, IFTTT does not have the tools to do what I want with my products.

\n

Here is an example of what I would like to do. I know how to do all of this except the delays.

\n

Example routines:

\n\n
\n

My system and tools at my disposal:

\n\n", "Title": "Add delays to Google Home routines", "Tags": "|smart-home|google-home|smart-assistants|", "Answer": "

Google finally added this as part of the Google Home app! It's now a default function.

\n

If you need a full guide with pictures, I'll link some below. Simply put: when you are making a routine, add a new action; at the bottom of the list of actions there is an "add delay" button. Select this and set how long you want your delay to last, then add the action you want to happen after the delay. You can repeat this to add as many actions and delays as you would like (there is likely some limit).

\n

Here are some detailed guides:

\n\n" }, { "Id": "4102", "CreationDate": "2019-04-29T17:31:20.557", "Body": "

I would like to monitor the energy usage of various appliances in my house. I hope to put in a plug adaptor which records the power usage. These are available commercially very cheaply, but often only have internal recording features (e.g. this one), and the data is not recoverable.

\n\n

Preferably the adaptors would post the information to a central database as it is recorded \"live\". But I am also happy for them to record to an SD card or similar, which I can then analyse.

\n\n

Any advice or suggestions are appreciated.

\n", "Title": "Monitor Energy Usage", "Tags": "|power-consumption|", "Answer": "

TL;DR:

\n\n

One way to get what you want is to use WiFi smart plugs featuring power monitoring, which can be integrated into Node-Red. Node-Red can be run for example on a Raspberry Pi. From there you can also store the measured values in a database.

\n\n
\n\n

Long answer:

\n\n

I would suggest you look into smart plugs, which you can integrate into your LAN via WiFi. All of those plugs allow you to control them via WiFi and some also feature energy monitoring (at a higher cost for the plug, naturally).

\n\n

Typically, those smart plugs can be controlled via the manufacturer's app, which will also show you statistics about the energy usage, when the plug has that feature.

\n\n

However, often you can either update them with custom firmware (e.g. Sonoff products, which feature an easily reprogrammable ESP8266 chip) or they have APIs that allow you to control them without the need to use any cloud service via the manufacturer's app.

\n\n
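Where a local API exists, you can pull readings directly without any cloud. For instance, TP-Link/Kasa plugs speak a local JSON protocol obfuscated with a simple "autokey" XOR (sketch based on the publicly documented softScheck reverse-engineering write-up; the initial key 171 and the emeter command are taken from that write-up):

```python
import json

KEY_INIT = 171  # initial XOR key of the Kasa local protocol

def kasa_encrypt(plaintext: bytes) -> bytes:
    key, out = KEY_INIT, bytearray()
    for b in plaintext:
        key ^= b          # autokey: the next key is the ciphertext byte
        out.append(key)
    return bytes(out)

def kasa_decrypt(ciphertext: bytes) -> bytes:
    key, out = KEY_INIT, bytearray()
    for b in ciphertext:
        out.append(key ^ b)
        key = b           # next key is the previous ciphertext byte
    return bytes(out)

# The energy-monitoring query sent over TCP port 9999 to HS110-class plugs:
EMETER_QUERY = json.dumps({"emeter": {"get_realtime": {}}}).encode()
```

You would send `kasa_encrypt(EMETER_QUERY)` to the plug's local port and decrypt the JSON reply; Node-Red nodes for these plugs do essentially this under the hood.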

One example is the TP-Link smart plugs that I use in my home, controlled via Node-RED.\nAnother example is Tuya devices, which are sold under a variety of brand names.

\n" }, { "Id": "4108", "CreationDate": "2019-04-30T12:04:04.493", "Body": "

My main goal is to connect devices using only an access token, without manually adding devices to the cloud IoT platform.
\nIs there a platform that can do that automatically? How do I do it, and which platform is good for that purpose?

\n\n

Why I asked this question: I have used Node-RED, thinger.io, and ThingSpeak. However, in most of them, you have to add the device manually and create a device auth token. Is it possible to add a device automatically, subscribe to a topic (the topic name is the same one) and get data without manually adding devices? Basically, I need an automatic device registry based either on tokens or on username and password login.

\n", "Title": "connect a device automatically without configuring at iot platform end", "Tags": "|mqtt|open-source|", "Answer": "

You can use uBeac IoT platform.

\n\n

You should create a gateway and it will give you a unique URL (which you can change later). Then, set the given URL in your device.

\n\n

You can configure the security options as below:

\n\n\n\n

For debugging purposes, you can send data without any security settings.

\n" }, { "Id": "4110", "CreationDate": "2019-04-30T16:01:35.477", "Body": "

Some documentation and videos regarding the Device Provisioning Service (DPS) say it can handle multi-tenancy, but there seems to be confusion about how one tenant's configuration/data is isolated from other tenants'. Let's say I have 5 tenants, each of them having 1000 devices which I need to onboard to the IoT hub of their respective tenant (assume I have one IoT hub per tenant). An enrollment group is a perfect fit for this situation, but then do I have to create 5 enrollment lists (one per tenant) and configure all the devices and their attestation mechanism in the list? If this is the right way, does the \"attestation mechanism\" make a difference in isolating the registration of each tenant's devices?

\n\n

Some of the documentation also says tenant isolation is based on the ID scope of DPS, which would mean I need to create 5 DPS instances (one per tenant) and provide this ID scope and registration URL to the registration software during the manufacturing process. If that is the case, wouldn't it be a mess to encode the ID scope for each tenant's devices at the manufacturing step?

\n", "Title": "How enrollment list in Azure IoT Hub Device Provisioning Service isolate tenant specific configuration?", "Tags": "|security|azure|authentication|provisioning|", "Answer": "

The answer is more or less the same as for this question: Should there be one device provisioning service for one IoT hub if it is associated with one tenant?

\n\n

Yes, DPS isolates tenant configuration based on the attestation mechanism. Say an X509 certificate-based attestation mechanism is used and there are 5 tenants. There will be a single root CA for all the tenants, and a unique intermediate certificate per tenant, each signed by this root CA. All the devices belonging to a specific tenant will use that tenant's common intermediate certificate for signing. This configuration should be created in the DPS instance with one enrollment list per tenant.

\n\n
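Conceptually, the per-tenant enrollment groups behave like a map from the signing (intermediate) certificate to that tenant's hub. A toy model of that routing follows — this is an illustration of the concept only, not the actual DPS implementation, and the tenant names and hub hostnames are made up:

```python
# Toy model: enrollment groups keyed by the intermediate CA that signed
# the device certificate, mapped to the tenant's IoT hub.
ENROLLMENT_GROUPS = {
    "intermediate-ca-tenant-a": "tenant-a-hub.azure-devices.net",
    "intermediate-ca-tenant-b": "tenant-b-hub.azure-devices.net",
}

def assign_hub(device_cert_issuer: str) -> str:
    """Route a provisioning request to the hub of the tenant whose
    intermediate CA issued the device certificate."""
    try:
        return ENROLLMENT_GROUPS[device_cert_issuer]
    except KeyError:
        raise PermissionError("no enrollment group matches this certificate chain")
```

A device presenting a chain signed by tenant A's intermediate can only ever land in tenant A's hub, which is the isolation property being asked about.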

In short, when a device communicates with DPS, it is identified by matching against the intermediate certificates uploaded to DPS initially; based on this, DPS determines which tenant the device belongs to and allows it to connect to the specific IoT hub configured in the list.

\n" }, { "Id": "4111", "CreationDate": "2019-04-30T19:57:11.377", "Body": "

I have a NodeMCU (ESP8266) board that I want to control over the internet. I am trying to find a solution where I don't have to set up any configurations on my router like port forwarding. I came up with the following solution:

\n\n

\"enter

\n\n

I have a website where the user changes the device status (by status I mean, for example, the GPIO5 pin value HIGH or LOW), which is then saved to a database on a shared hosting server. The NodeMCU periodically (for example, every 5 seconds) sends an HTTP GET request to the database. According to the value received from the database, the NodeMCU board changes the pin value to HIGH or LOW. If the NodeMCU changes its status (for example, a pin value from HIGH to LOW), the new device status is sent to the database with an HTTP POST request. The device also sends an HTTP POST request periodically (for example, every 60 seconds) so the user can monitor the device status on the website.

\n\n

There are a few problems with this configuration:

\n\n
    \n
  1. There is no real-time connection between NodeMCU and the user (there is always a delay in the device response)

  2. \n
  3. The device sends thousands of queries every day, which is a load on the shared hosting server. For example, if the GET request is sent every 5 seconds, that gives 17280 queries per day for one device.

  4. \n
\n\n

So my question is: how practical is this configuration on shared hosting (or any kind of hosting), what are the alternatives or improvements to it, and how can I establish a connection with the NodeMCU so that the device sends a GET request to the database only when the device status has been changed in the database by the user?

\n", "Title": "NodeMCU (ESP8266) board controlled over shared hosting database", "Tags": "|esp8266|", "Answer": "

Don't use HTTP, it is the wrong choice for this sort of thing.

\n\n

Use a messaging-based protocol (e.g. MQTT); that way updates are pushed to the device rather than having it poll for them. This cuts down on bandwidth and you get (near) real-time notification.
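The polling load the question estimates can be checked with quick arithmetic (figures taken from the question):

```python
# Requests per day when polling every 5 seconds vs. an idle MQTT connection.
SECONDS_PER_DAY = 24 * 60 * 60

poll_interval_s = 5
requests_per_day = SECONDS_PER_DAY // poll_interval_s
print(requests_per_day)  # -> 17280, matching the question's estimate

# With MQTT the broker pushes changes, so traffic scales with actual state
# changes (plus periodic keep-alive pings) instead of with elapsed time.
```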

\n\n

The next question is where to run a message broker. Shared hosting (e.g. LAMP stacks) doesn't normally allow you to run brokers, but for something this small, moving the install to something like AWS Lightsail will probably be cheaper anyway (though you will be responsible for setup/maintenance/security).

\n" }, { "Id": "4120", "CreationDate": "2019-05-03T14:15:58.713", "Body": "

I want to know whether the network to which the ESP32 is connected has internet connectivity or not. How can I send a ping request using the ESP32?

\n", "Title": "How to test internet connectivity of network to which esp32 is connected?", "Tags": "|arduino|esp32|", "Answer": "

You can ping some address and see if it is reachable. You can use this library for that.

\n" }, { "Id": "4121", "CreationDate": "2019-05-03T17:59:32.783", "Body": "

I am struggling with the creation of multiple IoT devices (NanoPi-like) which will be controlled via a cloud server. I wish to use MQTT, as I see this is the best method for a small footprint. I know MQTT has certificates, passwords, etc. \nBut I have no clue how to implement secure one-to-one communication between device and cloud. I might create one channel per device, but I'm afraid anyone who accesses a device physically will be able to join any other channel.\nI could create channels manually and assign user+pass for each, but doing so for many devices would be a waste of time. So I'm stuck on two aspects:

\n\n
    \n
  1. Best implementation for Device <-> Cloud data protocol. Will MQTT over TLS be fine?
  2. \n
  3. How can I automatically restrict devices to use only one channel? Like each device will register itself in the cloud with its own ID, and will use this ID as channel. But how to prevent rogue attacker from joining any other channels?
  4. \n
\n", "Title": "IoT <-> Cloud communication method", "Tags": "|mqtt|security|", "Answer": "

I'm going to ignore your first question for the reasons mentioned in the comment.

\n\n

As for the second: MQTT messages are published to topics (not channels). Nearly all MQTT brokers allow you to configure Access Control Lists (ACLs) that let you control which topics each user can publish and subscribe to.

\n\n

With a correctly set up ACL, the client will only be able to publish data to a specific topic and, if needed, subscribe to the specific topic used to send commands to that user's device.

\n\n

Most brokers also allow substitutions in the ACLs so you can set up template entries that match any user. E.g. for mosquitto you can use %u to match the username and %c to match the client ID, so a topic pattern might look like this:

\n\n

pattern write %u/data/#\npattern read %u/command/#

\n\n

This would let a user publish only to topics which start with their username followed by /data/..., and subscribe only to topics with [username]/command/... at the start.
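As a fuller sketch (assuming the Mosquitto broker; the file paths are illustrative), the broker configuration and ACL file might look like:

```
# /etc/mosquitto/conf.d/local.conf  (illustrative path)
allow_anonymous false
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/acl

# /etc/mosquitto/acl
# "pattern" lines apply to every authenticated user; %u is the username.
pattern write %u/data/#
pattern read %u/command/#
```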

\n\n

As a pub/sub protocol, MQTT is sometimes considered a broadcast medium (one publisher to many subscribers), but there is nothing to say you can't use topics that only the central controller publishes to and only one device subscribes to, in order to get 1-to-1 messaging. In the new MQTT v5 spec there is even the concept of reply messages, so you can specifically do request/response type messaging.
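The request/response idea can be sketched with a toy in-memory pub/sub (illustrative only; a real MQTT v5 client would carry the response topic in the message properties):

```python
# Toy in-memory pub/sub, just to illustrate MQTT v5-style request/response:
# the requester names a "response topic" and the responder publishes there.
subscriptions = {}  # topic -> list of callbacks

def subscribe(topic, callback):
    subscriptions.setdefault(topic, []).append(callback)

def publish(topic, payload, response_topic=None):
    for callback in subscriptions.get(topic, []):
        callback(payload, response_topic)

replies = []

def on_command(payload, response_topic):
    # The device answers on whatever response topic the requester named.
    if response_topic:
        publish(response_topic, f"done: {payload}")

subscribe("device1/command", on_command)                          # device side
subscribe("controller/replies", lambda p, rt: replies.append(p))  # controller side

publish("device1/command", "reboot", response_topic="controller/replies")
print(replies)  # -> ['done: reboot']
```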

\n" }, { "Id": "4125", "CreationDate": "2019-05-04T11:56:33.150", "Body": "

I decided I should start version controlling my Home Assistant configuration, and want to commit the unedited config first to see what changes I have already made. Can I find a complete unedited copy of the config folder (and other relevant files, if any) anywhere?

\n", "Title": "Where can I find the default configuration of Home Assistant?", "Tags": "|home-assistant|", "Answer": "

This official guide suggests that only the config directory should be version controlled, along with a specific .gitignore. Applied to a fresh install of Home Assistant 0.105.3, this leaves five files: configuration.yaml (shown below), groups.yaml, automations.yaml, scripts.yaml and scenes.yaml:

\n\n
\n# Configure a default setup of Home Assistant (frontend, api, etc)\ndefault_config:\n\n# Uncomment this if you are using SSL/TLS, running in Docker container, etc.\n# http:\n#   base_url: example.duckdns.org:8123\n\n# Text to speech\ntts:\n  - platform: google_translate\n\ngroup: !include groups.yaml\nautomation: !include automations.yaml\nscript: !include scripts.yaml\nscene: !include scenes.yaml\n
\n" }, { "Id": "4133", "CreationDate": "2019-05-09T12:03:31.910", "Body": "

Description: I have a smart wrist band (link to Amazon) which has Bluetooth connectivity. \nMy goal is to read, in real time on my computer (running Ubuntu 18.04), some of the data that is tracked by the wearable device, such as the HR or the number of steps. In other words, every time the desired variable is recorded by the smart band it should also be displayed on my PC monitor.

\n\n

Question: Unfortunately I am a beginner in this topic and I have no clue how to do it, or whether it is even possible. Consequently, I would like to ask if you are able to provide some links to possible solutions where I can get some inspiration from. It would be nice if the suggested solutions involved some open-source software.

\n", "Title": "Stream real time data from smart band/watch to computer via Bluetooth", "Tags": "|bluetooth|data-transfer|streaming|smart-watches|", "Answer": "

It looks like a pretty standard Bluetooth Low Energy (BLE) device, so assuming the manufacturer hasn't done something strange (like Fitbit, for example), you should be able to use any language that has BLE GATT support to connect to the device and then subscribe to the characteristics for each of the different data fields. For NodeJS there is the noble library, which is pretty good for building this sort of thing.

\n\n

There are mobile apps like the nRF Connect that will let you interrogate the device and let you determine the UUIDs for the Service and Characteristics which will help get you started.

\n" }, { "Id": "4152", "CreationDate": "2019-05-15T12:53:34.140", "Body": "

We are trying to create a system that reads and performs some computation on data coming in via serial port (from a CAN network) and send the results to the cloud. I have been looking into AWS Greengrass and am wondering if it would be possible to create a device that does the processing/sending results to the Core, AND a Core that forwards the results to the cloud on the same machine (e.g. a Raspberry Pi)?

\n", "Title": "Can I run a Greengrass Core and an IoT Device on the same machine?", "Tags": "|aws-iot|aws|aws-greengrass|", "Answer": "

AWS Greengrass is designed to do IoT processing at the edge, rather than (or in\naddition to) sending it to the cloud.

\n\n

So, if you want to do some processing of your data at the edge, you can use any IoT Edge platform that fulfills your requirements. If you choose Greengrass, Amazon documents where it runs; we have only tested it under Intel Linux.

\n\n

Else, if you just want to forward the data to the cloud, then you just want a\n\"gateway\" functionality that packages the data that your device is generating into\nthe format that your cloud platform wants. That is usually MUCH less effort than\nintegrating with an edge platform.

\n" }, { "Id": "4165", "CreationDate": "2019-05-20T15:44:14.860", "Body": "

They say Google Home is an IoT device, but it has the ability to control other devices like lights, air conditioners, etc. (in smart homes), which is more like a base station in a wireless sensor network. Or is Google Home an integration of both technologies together?\nI think IoT devices are connected to the internet directly. But in the case of smart homes, why are they using Google Home or Echo, etc.?

\n", "Title": "Is Google home an IoT Device or a WSN base station?", "Tags": "|networking|sensors|amazon-echo|google-home|", "Answer": "

This largely depends on the definitions you choose. The criteria for what makes a device an \"IoT device\" is contentious at best. Most definitions would call a Google Home an IoT device if you carefully read through the definitions given, but not everyone agrees on the same definition, so it becomes rather nebulous.

\n\n

We can also look at how a wireless sensor network is defined, particularly:

\n\n
\n

The base stations are one or more components of the WSN with much more computational, energy and communication resources. They act as a gateway between sensor nodes and the end user as they typically forward data from the WSN on to a server.

\n
\n\n

I would tend to say that a Google Home doesn't fit this definition well - a crucial aspect is that other sensors connect directly to the base station, like a hub. While a hub like SmartThings certainly does this, generally a Google Home doesn't directly link to devices and rather sends messages via remote servers.

\n\n

Ultimately you will need to consult the definitions you want to use to check - but remember that definitions aren't always agreed upon!

\n" }, { "Id": "4182", "CreationDate": "2019-05-28T10:10:09.197", "Body": "

My understanding is that network protocols are BLE, WiFi, Zigbee, etc., and messaging protocols are HTTP, MQTT, etc. So my questions are:

\n\n
    \n
  1. Is my understanding so far correct?
  2. \n
  3. Are network and communication protocols the same thing, used interchangeably, or do they mean something else?
  4. \n
\n", "Title": "Difference between Network, Communication, and Messaging protocol", "Tags": "|networking|communication|protocols|", "Answer": "

For simplicity, you are correct. However, those \"messaging\" protocols are typically only relevant at the IP layer, which (again for the sake of simplicity) is typically understood as the WAN endpoint. When implementing BLE, LoRaWAN or ZigBee you would typically use the read/write/notify/indicate operations (or equivalent) defined by the standard. The processing overhead of implementing MQTT over BLE would remove much of the benefit of BLE (I won't go into MQTT-SN...). Typically you would transmit local data natively and use a less energy-efficient base station (gateway) to reformat the data (JSON/CSV/etc.) before publish/POST/etc.

\n\n

There are so many possible implementations that it's near impossible to set a gold standard.. it's about understanding the tools well enough to pick the best combination for the job.

\n" }, { "Id": "4194", "CreationDate": "2019-05-29T14:59:03.370", "Body": "

I am quite new to Z-Wave. I have a gateway, 2 z-wave sockets and 3 light switches. Everything comes from Keemple brand if that helps. However, using Keemple I cannot automate things the way I would want. I am a programmer so writing scripts in any language is not a problem for me. I was thinking about making use of my raspberry pi. I would like my lights to automatically turn off when my phone disconnects from the Wifi network.

\n\n

I am thinking of 3 possible scenarios:

\n\n
    \n
  1. The RPI becomes a Z-Wave gateway and I pair all devices with it. The only problem is that all devices are currently paired with the Keemple gateway. Is it possible for one device to be connected to more than one gateway at a time? Once all Z-Wave devices are connected to the RPI and the gateway, I could communicate directly from the RPI and do some scripting there.

  2. \n
  3. The RPI becomes a Z-Wave device (sensor). It would have boolean values representing the states I am interested in. Those states would take their values from the scripts I would write. I would need to pair it somehow (not sure if possible) with my Keemple gateway and set up a scene for switching off all lights when the phone becomes unreachable.

  4. \n
  5. The RPI communicates with the gateway over TCP/IP. However, I am not sure if that is possible. I learned about MQTT, but I cannot figure out a way to connect to my gateway over that protocol... Possibly it is only available to the Keemple cloud and not for me to use.

  6. \n
\n\n

Are these scenarios possible? Which way should I go to achieve what I need?

\n", "Title": "How can I use Raspberry Pi to communicate with my Z-Wave gateway?", "Tags": "|mqtt|raspberry-pi|zwave|", "Answer": "

I learned that the best way is to turn the RPI into a Z-Wave gateway using a Z-Wave controller (a USB device that can communicate with Z-Wave devices). It is important to note that one Z-Wave device can be connected to only one network at a time, so in my case I need to disconnect all devices from my current network and connect them to the new one.

\n\n

One of the good open source choices to turn your RPI into a Z-Wave controller is Domoticz (http://www.domoticz.com/). Once this is set up, one can connect devices and configure one's own scenes. Domoticz supports other protocols, not only Z-Wave, so it is a good way to integrate many devices into one home automation system.

\n" }, { "Id": "4221", "CreationDate": "2019-06-11T08:52:31.887", "Body": "

I am using a Nordic BLE SoC, the nRF52 DK, along with the S130 SoftDevice and SDK 14.2.0.\nI want to secure an advertised BLE packet with AES-CCM encryption.\nThe board contains a co-processor for AES calculation that is not accessible while using the SoftDevice that I am using to generate packets and advertise.

\n\n

The solution is to use the Timeslot API, but I do not know how to do it. There is a tutorial, but it's only valid for nRF5 SDK 11.

\n", "Title": "How to set up the Timeslot API for AES-CCM peripheral use", "Tags": "|bluetooth-low-energy|", "Answer": "

I posted this question on the Nordic DevZone and got this answer:

\n\n
\n

https://devzone.nordicsemi.com/f/nordic-q-a/48518/softdevice-handler-h-is-missing\n The same question has been answered before so I am posting the other answer as well, as it has longer discussion:\n https://devzone.nordicsemi.com/f/nordic-q-a/48025/not-finding-softdevice_handler-h-in-nrf52_sdk_15-0-3

\n
\n" }, { "Id": "4251", "CreationDate": "2019-06-22T13:28:33.677", "Body": "

I have an MQTT client on an internal network and an MQTT server somewhere in the cloud. How can I connect to the MQTT server without opening a port on the client network?

\n", "Title": "Connect to MQTT Server without opening port", "Tags": "|mqtt|", "Answer": "

You don't need to open any ports to connect to an external broker on a normal NAT'd internal network (e.g. a normal domestic ADSL setup.).

\n\n

As long as your network allows all outbound connections (and related replies) then it should all just work.

\n\n

This is because all MQTT connections are initiated by the client and are then persistent until the client closes the connection. Messages for subscriptions just flow back down this existing connection.

\n\n

If you need to explicitly allow outbound ports then the default port is 1883.

\n\n

If you are on a more locked-down network, e.g. a corporate network that requires you to use a proxy to reach the outside world, then you have 2 choices:

\n\n
    \n
  1. You need an OSI layer 5 proxy e.g. socks
  2. \n
  3. If you only have access to a HTTP proxy then you have to hope that your external broker and client supports MQTT over Websockets.
  4. \n
\n" }, { "Id": "4260", "CreationDate": "2019-06-26T19:38:25.217", "Body": "

My brother is my neighbor and we are saving some money sharing internet, but I don't want to share anything else, just internet.

\n\n

What I did was to pass a cable through the wall that connects his router \u2014the one that has access to the internet\u2014 to my router. I created my own WiFi name and password.

\n\n

Today my brother-in-law has been listening to online radio, and I got a notification on my Android phone telling me that someone was casting to a wireless sound system.

\n\n

I checked my router by entering its IP in the browser's URL box and saw that someone \"unknown\" was connected to my network.

\n\n

I would like anything connected to my WiFi to stay in a different network, not able to see what my brother is doing (and vice-versa).

\n\n

How should I start?

\n", "Title": "Router connected to a router: how to cast different things in each router?", "Tags": "|routers|chromecast|", "Answer": "

One way to get some isolation between your wired networks is for each of you to have your own dedicated router. The WAN port of each of your routers is wired to a LAN port of the ISP's router.

\n\n

To the ISP router, there are only two clients: Router You and Router Brother. Each of the You and Brother routers is configured as normal for a router. They have their DHCP servers enabled. They perform DNS caching. They have their WiFi on with your choice of SSIDs and credentials, different for each of you. When a request from your side is made, it is NAT'ed out of your router and into the common ISP router. There, it is NAT'ed again and presented to the wild Internet. Same for your brother.

\n\n

There is no internetworking between you and your brother. You share an ISP connection.

\n\n

Although anything is probably hackable, short of exploiting a bug in the router, neither of you can probe the other's network.

\n" }, { "Id": "4273", "CreationDate": "2019-06-28T18:48:49.397", "Body": "

I would like to understand why we use ARM for routers, cell phones, cameras, refrigerators, smart TVs, and everything, instead of using any other architecture like x86.

\n\n

What are the advantages of using ARM for these things?\nWhat would be the problems of simply using x86? Is it all about cost, size and energy?

\n", "Title": "Why does the ARM architecture dominate the IoT market?", "Tags": "|hardware|standards|arm|", "Answer": "

This is a very good question, I'd like to offer my point of view in the matter.

\n\n

Arm has designed their processors with the embedded world as the target, so they thought about everything with this target in mind:

\n\n\n\n

I mostly work with Linux, and when you're developing a product with Arm it's way easier than with x86. Everything is in place and ready to save you time. First, there is a huge community, and you'll find plenty of resources to help you when you're stuck. There is also the fact that it's the industry standard, so you won't struggle with anything too exotic when working with Arm: you'll have all the drivers and any kind of eval boards, SoCs and SoMs that you would need. On top of all that, almost all embedded engineers know their way around Arm, so if you want to push for another architecture you'll have to have a really good technical reason.

\n\n

Companies that use other architectures mostly do so because of the legacy of former products and their engineers' existing knowledge.

\n\n

To sum up, I think that Arm is the easiest choice when developing a new product, but you can also have good reasons to use other architectures (legacy, or very specific needs for the product that are only available in a specific architecture).

\n" }, { "Id": "4283", "CreationDate": "2019-07-02T15:27:01.690", "Body": "

What's the proper, and hence best, way to push updates to a Raspberry Pi with a limited mobile data cap (1mb/day)?

\n\n

My first plan was to set up a Git repository and pull files from it once a week, for example. The problem with that is that it's not frequent enough, and it still pulls the whole repository even when no change has been made, which might eat up data.

\n\n

I am thinking about something more precise, where I can see which files changed and download only those specific files.

\n\n

SSHing into the devices is not an option, as there will be around a hundred of them.

\n\n

Any suggestion?

\n\n

thanks,

\n", "Title": "remotely update a raspberry pi with limited data", "Tags": "|raspberry-pi|data-transfer|mobile-data|over-the-air-updates|", "Answer": "

Given the answers in the comments then using git over ssh is probably the best option.

\n\n

Firstly, git will only pull the differences between the current HEAD hash and the HEAD on the remote. Given you are pulling updates to Python scripts, these should just be text files, so the diffs should be small.

\n\n

I say pulling over SSH because you can enable compression on the whole link in the client (and server) settings, which helps to reduce the traffic size.

\n\n

SSH also means you can install an SSH key on each machine that is locked down to allow only read-only access to the git repo, which should help limit the security exposure.

\n\n

Rsync also only transfers the differences between files by default and can also run over SSH, so it can use the built-in compression as well. But rsync is going to need to send file size/date/checksum info for each file, which is likely to consume more network resources than just checking whether the git HEAD hash has changed when there hasn't been an update.

\n" }, { "Id": "4284", "CreationDate": "2019-07-03T05:59:38.033", "Body": "

I'm in the process of building an IoT device using ESP8266. The device will eventually contain a couple of motors, and I would like to control these motors using MQTT. I would like to make the device as cheap as possible, so I would like to avoid things like displays and keyboards.

\n\n

So, when the device is turned on it is supposed to connect to the local WiFi, and then to an MQTT broker. But how does it know the local SSID to connect to, and what about the username and password? Since the device has no display or keyboard, there is no way to input these things. And how does the user know whether the device was able to connect or not? For troubleshooting, it would be nice if the device at least had some way to indicate what the problem might be.

\n\n

The solution I have thought of is to have one button and one LED on the device. The button would be marked \"config\" or similar. When the button is pressed, the device will start operating as a WiFi access point with a predefined SSID. It will have a webserver, so the user can connect with a laptop or phone to this predefined access point and enter the local network settings (SSID, username and password) as well as the address for the MQTT broker. The LED will be used to indicate the mode of operation, and also as error indication. For example, when the LED is glowing steady everything is connected, long flashes means it is in config mode, short flashes means there is an error, or something similar.

\n\n

My question is: is the solution I have proposed a standard way of doing things for this problem? I.e. will it feel like a familiar flow to the user, or would some other way be better? After doing a little bit of searching I have found e.g. this, which seems to be a similar user flow. I would still be interested in hearing what experience you have with this, and how you would solve it.

\n\n

If it turns out that this is a good and useful way to solve this problem, and since it is a kind of generic solution, it would be nice to not reinvent the wheel too much. I'm thinking that there could be a library that could do all of this. The library would be configured with the input pin for the button and output pin for the LED, and then take care of the rest. It could be built upon the PubSubClient library and based on the tutorial ESP8266: Connecting to MQTT broker. So is there a library that does this or something similar? If not I'll take a stab and create my own, but would like to hear about what's out there first.

\n\n

Thanks!

\n", "Title": "What is a good way for an IoT device to receive its network settings?", "Tags": "|wifi|esp8266|arduino|", "Answer": "

There is also the luftdaten.info project, an open-source particle sensor with its own firmware. They do a similar thing to what you proposed, only without the config button: they start the web server by default when the device is powered on. After a certain threshold (I think it's somewhere between 3-10 minutes), the internal web server is shut down, so no more configuration is possible until the next power cycle.

\n\n

This solution might be too insecure for certain scenarios, but you might want to know about it nevertheless.

\n\n

Edit:

\n\n

To get the initial configuration into the device, the following procedure is in place:

\n\n

When the device boots, it tries to reach the configured WLAN (there is no configured WLAN at the first startup). If it fails to connect to the pre-configured WLAN, it sets a static IP and opens its own wireless network without a password, which one can connect to in order to do the initial configuration via the static IP address.
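The fallback flow described above can be modeled in a few lines (plain Python pseudo-model, not ESP firmware; `try_connect` stands in for the firmware's WiFi join call):

```python
# Illustrative boot logic: try the stored WLAN first, and fall back to
# opening an unsecured configuration access point if that fails.
def boot(stored_ssid, try_connect):
    if stored_ssid and try_connect(stored_ssid):
        return "connected"
    # No stored network, or the connection failed: open an AP for setup.
    return "config-portal"

print(boot(None, lambda ssid: False))        # -> config-portal
print(boot("home-wifi", lambda ssid: True))  # -> connected
```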

\n" }, { "Id": "4287", "CreationDate": "2019-07-03T15:30:04.737", "Body": "

I am developing on an ESP32 devboard (esp8266, wroom). I need to get the partition table of the currently running device.

\n\n

This document contains quite good documentation of its partition table. I can also read/write flash regions with the esptool.py and parttool.py tools. These can manage and modify the table well.

\n\n

However, I did not find a way to read its partitions. How can I do it?

\n", "Title": "How can I list the partition table of a currently running esp32 devboard?", "Tags": "|esp8266|esp32|flash-memory|", "Answer": "

Option 1: idf.py
\nIf you have an esp-idf project you can simply run this command. You must be in your project folder, e.g. 'c:\\my-esp32-projects\\sample-project'

\n
idf.py partition-table\n
\n

and the partition table will be printed in the console like this:

\n
Partition table binary generated. Contents:\n*******************************************************************************\n# ESP-IDF Partition Table\n# Name, Type, SubType, Offset, Size, Flags\nnvs,data,nvs,0x9000,24K,\nphy_init,data,phy,0xf000,4K,\nfactory,app,factory,0x10000,1M,\n*******************************************************************************\n
\n

Option 2: Espressif-IDE
\nUse the Espressif-IDE, right-click on your project and choose ESP-IDF: Partition Table Editor, this way you have a graphic window where you can view/edit the partition table for your esp32 application\n\"enter

\n" }, { "Id": "4295", "CreationDate": "2019-07-04T14:59:05.730", "Body": "

I have created a publisher script using the Paho MQTT JavaScript API which publishes values to two topics, MyHome/Temp and MyHome/Hum. The script runs successfully and publishes data to the CloudMQTT broker. In my subscriber script I have subscribed to these two topics and print them to the console as follows:

\n\n
function onConnect() {\n  console.log(\"onConnect\");\n  client.subscribe(\"MyHome/Temp\");\n  client.subscribe(\"MyHome/Hum\");\n}\n\nfunction onMessageArrived(message) { \n  console.log(message.destinationName +\" : \"+ message.payloadString);\n}\n
\n\n

It prints both the topic names and their corresponding values.\nNow I want to extract the values of both topics using message.payloadString and store them in variables as follows:

\n\n
function onMessageArrived(message) { \n  var temp = message.payloadString;\n  var hum = message.payloadString;\n  ...\n}\n
\n\n

But I am getting only one value in both variables, i.e. the value of the last topic, 'Hum'. Can anyone please help me solve this?

\n", "Title": "How to extract values of multiple topics in onMessageArrived(message) function of Paho MQTT JavaScript API?", "Tags": "|mqtt|eclipse-iot|", "Answer": "

I found a different solution to this in Python.

\n

client.py

\n
import threading\nimport time\n\nimport paho.mqtt.client as mqtt\n\n\nclass Messages(threading.Thread):\n    def __init__(\n        self,\n        clientname,\n        broker="127.0.0.1",\n        pub_topic="msg",\n        sub_topic=[("msg", 0)],\n    ):\n        super().__init__()\n        self.broker = broker\n        self.sub_topic = sub_topic\n        self.pub_topic = pub_topic\n        self.clientname = clientname\n        self.client = mqtt.Client(self.clientname)\n        self.client.on_connect = self.on_connect\n        self.client.on_message = self.on_message\n        self.client.on_subscribe = self.on_subscribe\n        self.received = {}\n        self.topicTemp = ""  # remembers the last topic handled\n\n    def on_connect(self, client, userdata, flags, rc):\n        if rc == 0:\n            print("Server Connection Established")\n        else:\n            print("bad connection Returned code=", rc)\n        self.client.subscribe(self.sub_topic)\n\n    def on_subscribe(self, client, userdata, mid, granted_qos):\n        print("Subscription complete")\n\n    def on_message(self, client, userdata, msg):\n        if self.topicTemp != msg.topic:  # compare the incoming topic name\n            self.received[msg.topic] = {  # store one entry per topic\n                "topic": msg.topic,\n                "payload": str(msg.payload.decode()),\n            }\n        self.topicTemp = msg.topic  # remember the topic just handled\n\n    def begin(self):\n        print("Setting up connection")\n        self.client.connect(self.broker)\n        self.client.loop_start()\n\n    def end(self):\n        time.sleep(1)\n        print("Ending Connection")\n        self.client.loop_stop()\n        self.client.disconnect()\n\n    def send(self, msg, topic=None):\n        if topic is None:\n            topic = self.pub_topic\n        self.client.publish(topic, msg)\n\n    def get(self, topic=None):\n        if topic is None:\n            topic = self.sub_topic[0]\n        self.client.subscribe(topic)\n\n\ndef main():\n    remote = Messages(clientname="PC", broker="127.0.0.1")\n    remote.begin()\n\n\nif __name__ == "__main__":\n    main()\n\n
\n

function.py

\n
from mqtt_client_test import Messages\nimport time\n\nremote = Messages(\n    clientname="Camera",\n    broker="127.0.0.1",\n    pub_topic="pc/camera",\n    sub_topic=[("pc/camera", 0), ("commands/detect", 1)],\n)\nremote.begin()\nwhile True:\n    msg = remote.received\n    # print(msg)\n    if "commands/detect" in msg:\n        print("command", msg["commands/detect"]["payload"])\n    if "pc/camera" in msg:\n        print("camera", msg["pc/camera"]["payload"])\n    time.sleep(1)\n
\n

Now you can get the payloads from different topics in a single on_message function and extract them however you want to use them elsewhere.
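A shorter sketch of the same idea (assuming the paho-mqtt callback signature; the stand-in message object below is only for demonstration): keep one dict of the latest payload per topic and dispatch on msg.topic inside a single callback.

```python
from types import SimpleNamespace

values = {}  # topic -> latest payload string

def on_message(client, userdata, msg):
    """paho-style callback: remember the latest payload for every topic."""
    values[msg.topic] = msg.payload.decode()

# Stand-ins for real MQTT messages, just to show the dispatch:
on_message(None, None, SimpleNamespace(topic="MyHome/Temp", payload=b"21.5"))
on_message(None, None, SimpleNamespace(topic="MyHome/Hum", payload=b"48"))
print(values["MyHome/Temp"], values["MyHome/Hum"])  # -> 21.5 48
```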

\n" }, { "Id": "4298", "CreationDate": "2019-07-05T05:01:36.837", "Body": "

LoRa (Long Range) is one of the promising technologies offering long-range communication with low power consumption.

\n\n

Therefore, when building a device that must be deployed in an area with no network coverage, or to achieve a long range such as 5 km or more with low power consumption, it is considered a suitable technology.

\n\n

However, if I am planning to build a system that needs to send images to a remote location over a long range or in an area with no network coverage (e.g. rural areas, agricultural fields), what is the better option?

\n\n

Is it a good practice to use LoRa for sending images? Since it has low bandwidth, how can a large image be sent? What is the maximum size that could be sent?

\n\n

In detail, I want to capture images of crops in the field and send them for analysis on a cloud server. If I am sending an image (1-2 MB) in small parts, how many transmissions will it take if the distance between the sensor node and the gateway is around 2 km? How can I ensure that all parts are transmitted successfully?

\n\n

In practice I have seen that, out of 10 packets of a simple text message, 1-2 packets are usually lost even at the closest distance of 10 ft. This may be because I have not yet used a gateway as the receiver; instead I have used SX1278 LoRa modules as both sending and receiving devices in 1-to-1 communication.

\n", "Title": "Is it a good option to send images over LoRa network?", "Tags": "|networking|lora|lorawan|", "Answer": "

For transporting image data:

\n

If the crops are to be monitored for changes over time, then successive images can be diffed and only the difference patch needs to be uplinked.

\n

The LoRa Alliance has published a specification for fragmented data block transport. For this question, each subsequent image can be treated as a patch, and the fragmented data block transport mechanism (used with a Class-C multicast session) can be applied to transfer the diff image.

\n

https://lora-alliance.org/resource_hub/lorawan-fragmented-data-block-transport-specification-v1-0-0/
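The fragmentation idea above can be sketched in a few lines: split the image bytes into fixed-size fragments, each carrying an index and a total count so the receiver can detect gaps and reassemble. This is an illustrative sketch only, not the LoRa Alliance wire format; the 4-byte (index, total) header and the 48-byte fragment size are my own assumptions (the usable payload per uplink depends on region and data rate).

```python
import struct

FRAG_SIZE = 48  # assumed payload bytes per LoRa uplink; region/data-rate dependent

def fragment(data: bytes, frag_size: int = FRAG_SIZE):
    """Split data into numbered fragments: header = (index, total), both uint16."""
    total = (len(data) + frag_size - 1) // frag_size
    return [
        struct.pack(">HH", i, total) + data[i * frag_size : (i + 1) * frag_size]
        for i in range(total)
    ]

def reassemble(fragments):
    """Reassemble fragments (any order); returns None if any fragment is missing."""
    parts = {}
    total = None
    for frag in fragments:
        i, total = struct.unpack(">HH", frag[:4])
        parts[i] = frag[4:]
    if total is None or len(parts) != total:
        return None  # at least one fragment was lost and must be retransmitted
    return b"".join(parts[i] for i in range(total))
```

A 1 MB image at 48 bytes per fragment works out to over 20,000 uplinks, which illustrates why diffing and aggressive compression matter here.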

\n" }, { "Id": "4303", "CreationDate": "2019-07-07T22:04:37.157", "Body": "

I'm testing an MQTT client setup with the Eclipse test server. I noticed that \"off\" is sent automatically to every topic my client subscribes to, with the retain flag set. I can see the logic behind it, but so far I haven't found this broker feature documented anywhere, so I was wondering if anyone has had a similar experience or has more details about any documentation for this broker. Is it intended, or is the public broker topic-space just global and I'm getting someone else's messages?

\n", "Title": "Eclipse broker publishes off on subscribe", "Tags": "|mqtt|eclipse-iot|", "Answer": "

Without seeing your code, so that we know which topics you are subscribed to, this is really hard to answer.

\n\n

But, yes this is a totally public broker with a single shared topic space for all users. So it is very likely that you are receiving a message published by a previous user.

\n\n

It should really only be used to test MQTT client implementations, and not for anything of value, since at any time anybody could publish to any topic, or it could go down (the Eclipse broker is set to change URL and implementation soon [July 2019]).

\n" }, { "Id": "4314", "CreationDate": "2019-07-10T17:24:35.920", "Body": "

I have what may be a crazy idea, inspired after both my neighbours had online purchases swiped off their porches. Not cool. I want to build a box for couriers to put stuff in, and I want to put a lock on it. I have read about smart locks that can have temporary passwords set (some examples may be found here). My question is: has anyone done this, and would these locks need to be tucked out of the elements (like under a roof) so that rain and snow (a big issue where I live) won't get on them?

\n\n

Now, being a new contributor to this page, I don't know if this is a place for product recommendations - so what I am asking for specifically is things to look out for or design considerations other than \"box\" which is all I have in mind as of now.

\n", "Title": "Smart Lock | Exterior Use", "Tags": "|smart-lock|", "Answer": "

Many of these types of product are intended for use on exterior doors so should be weather proof (when installed correctly).

\n\n

I would point out that they are probably intended to be installed vertically (e.g. in a front door) rather than horizontally (e.g. in the top of a box), where water might pool on the surrounding surface or on the face of the product.

\n" }, { "Id": "4318", "CreationDate": "2019-07-11T05:33:30.203", "Body": "

I am switching to Platform IO as my ESP32 IDE. Alas, none of my boards are supported by the debugger. See https://docs.platformio.org/en/latest/plus/debugging.html#piodebug

\n\n

To save me a long time searching, does anyone know of an ESP32 with a built-in display which is supported? I don't care which board; all that I need is WiFi and BT (or LoRa) and a built-in display.

\n\n

Please note that I am not asking for \"best\" or anything opinion based. The accepted answer will be the first posted. I am hoping that one of you is already using the PlatformIO debugger on an ESP32 with a built-in display.

\n", "Title": "Seeking ESP32 with display which is supported by the PlatformIO debugger", "Tags": "|esp32|platform-io|", "Answer": "

I also bought myself an ESP-WROVER-KIT-VB.

\n

It's pricey, $39.99 on AliExpress, but it has a large screen. Here it is, playing Doom!

\n

\"enter

\n

Here's a YouTube link to the video.

\n

The best part, though, IMO, is the onboard debugger, which lets me load my program from the IDE (Visual Studio Code + PlatformIO), set breakpoints and run to them, examine the call stack when they are hit, and examine & change variable values, etc.

\n
\n

Description:
\nThe ESP-WROVER-KIT from Espressif supports the most distinguishing features of the ESP32. Whether you need external SRAM for IoT applications, or an LCD+camera interface, it has you covered!
\nThe ESP-WROVER-KIT is a newly-launched development board built around the ESP32. This board comes with an ESP32 module already. The V3 version of this kit comes with the ESP32-WROVER, a module with an additional 4 MB of SPI PSRAM (pseudo-static RAM).
\nThe ESP-WROVER-KIT features support for an LCD and MicroSD card. The I/O pins have been led out from the ESP32 module for easy extension. The board carries an advanced multi-protocol USB bridge (the FTDI FT2232HL), enabling developers to use JTAG directly to debug the ESP32 through the USB interface. The development board makes secondary development easy and cost-effective.

\n

Features:
\nESP32 is engineered to be fast, smart and versatile. The ESP-WROVER-KIT complements these characteristics by offering on-board high speed Micro SD card interface, VGA camera interface, as well as 3.2\u201d SPI LCD panel and I/O expansion capabilities.
\nBogged down by bugs. The ESP32 supports JTAG debugging, while the ESP-WROVER-KIT integrates a USB debugger as well. This makes debugging and tracing complex applications very easy, without the need for any additional hardware.
\nHave you been developing your applications around the ESP-WROOM-32 module? Not only does the ESP-WROVER-KIT support the popular ESP-WROOM-32 module, but it also supports the new ESP32-WROVER module!

\n

Specifications:
\nDual core 240 MHz CPU;
\nwith 4 MB SPI PSRAM (Pseudo static RAM);
\nBuilt-in USB-JTAG Debugger;
\n3.2\u201d SPI LCD panel;
\nMicro-SD card interface
\nVGA camera interface
\nI/O expansion
\nWiKi: http://wiki.52pi.com/index.php/ESP32_WROVER_KIT_SKU:_EP-0090

\n
\n" }, { "Id": "4321", "CreationDate": "2019-07-12T07:46:59.697", "Body": "

Can LoRa end points communicate directly?

\n\n

Or is a gateway/network server always required?

\n", "Title": "Can LoRa end points communicate directly?", "Tags": "|lora|", "Answer": "

Yes, LoRa can do point to point communication. There are many examples of people using LoRa between 2 devices for low power, low bandwidth notifications between devices e.g. https://www.youtube.com/watch?v=WV_VumvI-0A

\n\n

The hub/spoke topology is normally associated with the LoRaWAN implementations (e.g. The Things Network) used for building wide area support for devices to communicate with a cloud backend.

\n" }, { "Id": "4334", "CreationDate": "2019-07-15T15:33:32.703", "Body": "

I am quite new to the Internet of Things, and my wife bought a TV today (a pretty good deal) from Amazon Prime Day - I included the link so that we are talking about the same device. In the home we already have a Chromecast (Gen 1), and we have bought (but not yet installed) locks for the front and rear doors, all of which can be controlled using Google Home. We have, however, not bought a device like a Google Home to work as a hub (I worked for a supplier to Huawei, among others, while living in China, and I know what a lot of the collected data is used for, so I am a little paranoid about that kind of tech).

\n\n

Also, please note that as we continue to do renovations in our home we will be adding ceiling fans, thermostats, cameras, etc. all of which ideally could be linked and controlled through Google Home.

\n\n

My question is this: in order to connect all of these (current and future) devices do I need to buy a \"Hub\"? I have read online and haven't been able to find a definitive answer. My main concern right now is being able to control the TV from my phone without having to use a different app (Roku) as I like the simplicity of the Google Home App.

\n\n

In short - should I invest in some type of \"Hub\" or do I not need to?

\n", "Title": "Connecting Devices In Home", "Tags": "|smart-home|google-home|google-assistant|smart-assistants|smart-tv|", "Answer": "

Google Assistant is not something you buy; you buy devices that support Google Assistant to varying degrees (e.g. a Nest Home Hub Max supports just about everything possible, whereas a Google Home Mini supports less because it doesn't have a screen or a camera).

\n\n

Nearly all modern Android phones support Google Assistant, which will respond to the \"OK, Google\" keyword, text input, a squeeze (Pixel phones) or a custom button. All the features that you could trigger via a Google Home device can be triggered by interacting with the Google Assistant on the phone.

\n\n

You can also install the Google Home app (which is needed to set up new devices anyway)

\n\n

As for the TV, I can't see any mention of Android TV support in the listing, so it won't have any direct integration with the Google Assistant smart home control. I have to assume that you will be plugging the Chromecast into an HDMI port. Assuming that at least one of the HDMI ports supports HDMI-CEC, you will be able to turn the TV on/off and switch to the Chromecast input (but not away from it).

\n\n

Voice control of playing content will depend on what services you are signed up for. By default the assistant can search YouTube for video but \"Browse\" tab (3rd from the left at the bottom of the screen) in the Google home app offers me content from all the video apps I have installed (on the phone) that support casting to a Chromecast device.

\n\n

Currently, all control of devices (except Chromecast) is routed via the cloud, from the Google back end to the device manufacturer's system and then to the device. However, Google have recently announced support for local control, where commands will be issued across the local network to devices; it is not yet clear if this will also be possible via a phone or will require a device like a Google Home. (The stated reason for this is to reduce latency, and it will fall back to cloud control.)

\n" }, { "Id": "4337", "CreationDate": "2019-07-15T21:01:38.003", "Body": "

I am new to ESP8266 boards and IoT programming and I don't know how to describe better what I want to do without a picture.

\n\n

Question: How to send continuous data from 4 esp8266 WiFi clients to a webpage or to an application which handles all of them in parallel without introducing noticeable delays?

\n\n

So, I need 4 clients (users), each having an ESP8266 ESP-12E NodeMcu CP2102 board with 3 sensors (analog /digital sensors). These nodes should send continuous data or at least faster than human average reaction time (i.e. 250ms). \nThese clients should behave as players do in a multiplayer online game for example. Or I don't know, maybe like 4 WiFi Gaming Controllers.

\n\n

I need the system to be very reactive (without having a noticeable delay in displaying the data). For example, if User4 touches the Sensor3 (which can be also a simple button), the webpage / the application should sense this immediately while still displaying/handling the data from the other users. That's why I want to send the data to the webpage or to the application (C# app or Android App) as a long string even though some users are inactive or if their sensors inputs didn't change during a frame.

\n\n

Sensor1 can be a pulse sensor, Sensor2 a microphone and sensor3 a button or a touch sensor. It doesn't matter so much and I'm not decided yet on this.

\n\n

My problem is that I don't know how to approach this. I did some tutorials with WebSockets and one tutorial with MQTT, but I'm still very, very confused. I don't know if it's possible.

\n\n

One WebSocket tutorial I've followed is this one (I also found some typos in the code):
\nhttps://esp8266-shop.com/blog/websocket-connection-between-esp8266-and-node-js-server/

\n\n

The MQTT tutorial:\n
https://esp8266-shop.com/blog/configure-mqtt-runing-on-esp8266-for-home-automation/\n

\n\n

1400mAh LiPo Batteries

\n\n

3.7V to 5V @ 2A DC-DC boost convertors

\n\n

\"Multiple

\n", "Title": "How to send continuous data from a few ESP8266 to a webserver?", "Tags": "|mqtt|esp8266|web-sockets|", "Answer": "

MQTT is probably the right answer.

\n\n

Each ESP can publish to the broker with a topic structure something like:

\n\n
client1/sensor1\nclient1/sensor2\n
\n\n

You can then use MQTT over Websockets to subscribe to all the topics (either each topic separately or with the # wildcard).

\n\n

Since each message from a sensor arrives on its own topic, you can then use the topic to determine which bit of the page to update.

\n\n

For desktop or mobile apps you can either use native MQTT or MQTT over Websockets
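As a sketch of that routing step (assuming a paho-mqtt subscriber on the receiving side, and the client<N>/sensor<M> topic layout suggested above), the handler can split the topic to decide which client and sensor a reading belongs to:

```python
def route(state: dict, topic: str, payload: str) -> dict:
    """File an MQTT payload into state[client][sensor], based on the topic name."""
    client, sensor = topic.split("/", 1)  # e.g. "client1/sensor2"
    state.setdefault(client, {})[sensor] = payload
    return state

# In a real subscriber this would be called from paho-mqtt's on_message callback:
# def on_message(client, userdata, msg):
#     route(state, msg.topic, msg.payload.decode())
```

The same split works identically in a browser MQTT-over-Websockets client, so the web page can update only the widget for the client/sensor pair that changed.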

\n" }, { "Id": "4349", "CreationDate": "2019-07-19T08:35:46.333", "Body": "

Does MQTT-SN provide any encryption, like AES-128 etc.? As far as I know, it uses a simple binary encoding, but in my opinion that cannot be seen as a security feature. Many thanks in advance :)

\n", "Title": "MQTT-SN using any encryption?", "Tags": "|mqtt|", "Answer": "

No, MQTT-SN doesn't supply any encryption at a protocol level, but you are free to encrypt the payload yourself how ever you want.

\n\n

Also, since MQTT-SN doesn't make any requirements on what transport is used to carry the packets, you could build a system that uses DTLS (similar to how CoAPS works).
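To illustrate where payload-level encryption would sit, here is a toy sketch: a stream cipher derived from SHA-256 in counter mode, XORed over the payload before it is handed to the MQTT-SN client. This construction is for illustration only and is not vetted cryptography; a real deployment should use an AEAD cipher such as AES-GCM from a proper crypto library.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustrative only - not vetted crypto."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_payload(key: bytes, nonce: bytes, payload: bytes) -> bytes:
    """Encrypt or decrypt an MQTT-SN payload (XOR is its own inverse)."""
    ks = keystream(key, nonce, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))
```

The nonce must never repeat for the same key (e.g. derive it from a message counter), and this scheme adds no integrity protection, which is another reason to prefer a real AEAD mode.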

\n" }, { "Id": "4366", "CreationDate": "2019-07-23T10:51:11.337", "Body": "

Recently I have been building some projects using the Wia Dot One and the PIR module which I got with it. I am still new to electronics so I am using the \"Blocks\" feature on the Wia.io platform.

\n\n

Currently I am trying to receive events on the Wia platform whenever motion is detected by the device. The code seems to upload fine, I know because the Dot One behaves as it did during previous successful deployments. However the events do not appear on the platform. I am working with the blocks arranged as shown below:

\n\n

\"enter

\n\n

Is there anything I might be missing to get this to work?

\n\n

Thanks for your help!

\n", "Title": "Wia Dot One - PIR Events are not created?", "Tags": "|wifi|", "Answer": "

I read over your blocks; it seems to me that you have not included the \"Connect to Wifi\" block at the start of the block sequence. I included it in my own example and it worked perfectly.

\n\n

\"enter

\n" }, { "Id": "4372", "CreationDate": "2019-07-24T14:48:05.630", "Body": "

We are building a home and I'm thinking of making it a little smart by adding WiFi light switches and maybe a couple of other sensors. Now I'm looking for a light switch that can be built into the wall (like a normal light switch) and can also be operated like a normal light switch. Does anyone know of a light switch that can do that? It should also integrate with the Home Assistant software.

\n", "Title": "Build in light switches", "Tags": "|smart-lights|", "Answer": "

I found a working setup by adding some Sonoff MiniR2 relays behind light switches. The Sonoffs are connected to my Home Assistant instance using MQTT (I installed Tasmota firmware on them), so I can now switch the lights from Home Assistant and still use the light switches on the wall.

\n" }, { "Id": "4375", "CreationDate": "2019-07-25T06:09:40.957", "Body": "

Tl;dr- how can I get Windows 10 to recognize my IoT boards?

\n\n

I bought a Heltec ESP32, installed the Arduino IDE, loaded a demo program to the board, ran it, edited it, loaded the new version and ran it.

\n\n

The I decided to try PlatformIO and repeated the process. Everything ran smoothly.

\n\n

Then \"something happened\" - I don't know what (\"I didn't change anything\" ;-) and suddenly PLatformIO would not load a program.

\n\n

I looked at the Windows device manager and saw two COM port devices. Perhaps stupidly, I removed them and detected new devices. Nothing. Then I attached my ESP32 and saw only one COM port device.\n\"enter

\n\n

I uninstalled the Arduino IDE and reinstalled it, but even though I set preferences to COM3 in c:\users\<me>\AppData\Arduino15\preferences.txt:

\n\n
serial.databits=8\nserial.debug_rate=115200\nserial.line_ending=1\nserial.parity=N\nserial.port=COM3\nserial.port.file=COM3\nserial.port.iserial=null\nserial.show_timestamp=true\nserial.stopbits=1\n
\n\n

I get

\n\n
the selected serial port Failed to execute script esptool\n does not exist or your board is not connected\nBoard at COM3 is not available\n
\n\n

And, in PlattformIO, my platformio.ini contains

\n\n
[env:heltec_wifi_kit_32]\nplatform = espressif32\nboard = heltec_wifi_kit_32\nframework = espidf\nupload_protocol = esptool\n\n; COM3 or COM4\nupload_port = COM[34]\n
\n\n

I tried a different ESP32, of the same model; even attached a BBC Micro:bit, but nothing changed in device manager, even when scanned for hardware changes.

\n\n

Question (finally): how can I get Windows 10 to recognize my IoT boards?

\n", "Title": "How can I get Windows 10 to recognize my IoT boards?", "Tags": "|microsoft-windows|", "Answer": "

My first thought was to delete this question in shame.

\n

Then I thought that I had better answer it, to help anyone else who makes the same silly mistake in future.

\n

To save on manufacturing costs, some USB cables have only two wires, rather than four, and are good only for powering devices, but not for transmitting data.

\n

I had bought some cheap ($1.49) USB cables with power on/off switch from AliExpress and attached one of those to my ESP32. Switched back to the previous cable and all is well.

\n

Exit, stage left, with burning face.

\n

(but, at least not pursued by a bear ;-)

\n" }, { "Id": "4378", "CreationDate": "2019-07-25T11:56:37.387", "Body": "

I have my electricity and gas meters connected to an IHD (In-Home Display), and the IHD is connected to my router. I have access to all of the address of the IHD.

\n\n

I am wanting to get the energy data from the Smart Meters, in order to integrate it within an app.

\n\n

I have experience with Javascript, and Python. So, if there is a way to get the data using those languages, then my life would be easier.

\n\n

IHD: Chameleon IHD6 Technical Overview
\nSmart Meter: ZigBee EDMI

\n\n

Any ideas as to how I could access the data from my Smart Meters?

\n", "Title": "How can I access data from my In-Home Display?", "Tags": "|smart-home|", "Answer": "

UPDATE:

\n

The energy supplier has updated their web app to use a GraphQL query (made as part of a POST request). It is a client-side query, so all the necessary data is there.

\n

Depending on supplier, your mileage may vary.
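As a sketch of reproducing that client-side query yourself, here is the shape of the request body. The endpoint, query name, and fields below are hypothetical placeholders; you would copy the real ones from your browser's network inspector while the supplier's web app is open.

```python
import json

def build_graphql_request(query: str, variables: dict) -> bytes:
    """Build the JSON body for a GraphQL POST request."""
    return json.dumps({"query": query, "variables": variables}).encode()

# Hypothetical query - copy the real one from the browser's network tab.
QUERY = """
query Readings($from: String!, $to: String!) {
  meterReadings(from: $from, to: $to) { timestamp consumptionKwh }
}
"""

body = build_graphql_request(QUERY, {"from": "2019-07-01", "to": "2019-07-31"})
# Send with e.g. urllib.request.Request(url, data=body, method="POST",
#                headers={"Content-Type": "application/json"})
```

You will usually also need to replay whatever authentication header or cookie the web app sends, which the network inspector shows alongside the query.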

\n" }, { "Id": "4380", "CreationDate": "2019-07-25T20:57:10.437", "Body": "

I currently send data packets from LORIOT and TTN to Cayenne. I would like to understand whether it is possible to send data from LORIOT or TTN to my own custom HTTP web server instead, and whether there is any documented procedure explaining how to set this up.

\n", "Title": "Can I send messages from LORIOT to an HTTP web server?", "Tags": "|lora|lorawan|https|", "Answer": "

On TTN console:

\n\n

On Loriot console:
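Whichever console you configure, the HTTP integration ends up POSTing uplink JSON to your server, so the receiving side is just an HTTP endpoint. A minimal sketch of decoding a TTN-style uplink body (the payload_raw/dev_id field names are from the TTN v2 HTTP integration; LORIOT's JSON layout differs, so adjust the keys accordingly):

```python
import base64
import json

def parse_ttn_uplink(body: bytes) -> tuple:
    """Extract the device id and decoded payload from a TTN v2 HTTP uplink body."""
    msg = json.loads(body)
    # TTN delivers the raw LoRaWAN payload base64-encoded
    payload = base64.b64decode(msg["payload_raw"])
    return msg["dev_id"], payload

# In a real server this function would run inside the POST handler of
# e.g. http.server or Flask, reading the request body.
```

Your server needs to be reachable from the internet (public IP or a tunnel) for the network server to deliver the callbacks.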

\n\n" }, { "Id": "4384", "CreationDate": "2019-07-26T13:08:35.407", "Body": "

At 2:27 in this teaser/promo video for Maker Faire Detroit, there's a brief clip of an aloe plant with some IoT type stuff. Presumably this is from either Maker Faire Detroit 2018, or a previous year.

\n\n

What's the project, and what's its purpose?

\n\n

https://youtu.be/N5bPgrJ0YTw

\n\n

\"Connected

\n", "Title": "What Is This IoT Project from Maker Faire Detroit? Connected Aloe?", "Tags": "|sensors|", "Answer": "

As hinted at in the comments, this is a soil moisture sensor. The demo in a houseplant may be rather trivial, but moisture sensing is one of the more valuable aspects of smart agriculture.

\n\n

Typically you would want to measure soil moisture at multiple depths (depending on the specific crop), and multiple locations (assuming that there is varying solar exposure across a field). This allows you to decide where to use the constrained volume of water available, and make some value trade-offs based on the data (and forecasts).

\n\n

Agricultural IoT does not get as much press as the smart-home applications or more exciting industrial applications, but it is a significant market.

\n" }, { "Id": "4392", "CreationDate": "2019-07-29T20:09:39.143", "Body": "

I am just getting started with PlatformIO and have chosen a Heltec WiFi Kit 32.

\n\n

I loaded the PlatformIO demo espidf-http-request, built it and loaded it, but the serial terminal shows

\n\n
\n

\ufffd\u2404@\ufffd\u2400\u2419\u2414\u2404\u2414\u2400\ufffd\u2400 @\ufffd\ufffd\ufffdH\u2402\u2410\u2400\u2400\u2404\u2400\u2400\u2400\u2400 \u240e0\u2400\u2400\ufffd\u2412\n bd\u240b\ufffd\u2412\u2414Q\ufffd \ufffd\u2400\u2415\ufffd\ufffdbH\ufffdA\ufffdPb(,\u2412\ufffd0\ufffd\u2412P\u2400\u2412\ufffd\ufffd\ufffd\u02102<U\ufffd\u2404\u2402\ufffd \u2404\u2413\u2412\"\u2403\u2400``\u2400\ufffd\ufffdPI\u2405 \"J\u2406I\ufffd\u2404\u2400\ufffd\ufffd\u2400\ufffd2,\u2411$\ufffd\u240cB\u2401\u2404+\ufffd\ufffd\ufffd\ufffd

\n
\n\n

which seems to be a baud rate problem.

\n\n

My platformio.ini contains

\n\n
\n

[env:heltec_wifi_kit_32]
\n platform = espressif32
\n board = heltec_wifi_kit_32
\n framework = espidf
\n monitor_port = COM3
\n monitor_speed = 115200
\n upload_port = COM3
\n upload_speed = 115200

\n
\n\n

any idea what I am doing wrongly?

\n", "Title": "Heltec WiFi Kit 32 and PlaftormIO configuration (baud rate?)", "Tags": "|esp32|platform-io|", "Answer": "

Just in case it helps anyone ...

\n\n

74880 baud works and I see output.

\n\n

BUT, I have to Ctrl-c in PlatformIO\u2019s terminal and issue the command\nplatformio device monitor --baud 74880

\n\n

that means that I miss output from the startup sequence.

\n\n

Why does the build not honour the platformio.ini file and its monitor_speed entry? I would expect that to generate some code in the hidden main() function to set the baud rate.

\n\n

Oh, well, it's an imperfect solution, but it's a sort of solution.

\n" }, { "Id": "4398", "CreationDate": "2019-07-30T23:49:27.727", "Body": "

The LG Magic Remote has dedicated buttons for the Netflix and Amazon apps, and they are built-in commands in the Logitech Harmony vocabulary, so the command list for the LG Smart TV device includes those two commands, and you can easily assign them to hard or soft buttons, add them to sequences, etc.

\n

But I would like to do the same with apps like YouTube and Hulu that LG didn\u2019t give dedicated buttons to. Since the LG TV\u2019s on-screen interface automatically re-orders its home menu based on usage recency and frequency, there\u2019s no way to start these apps via a single unchanging sequence of directional button presses (short of using the search function, typing in \u201cYouTube\u201d or \u201cHulu\u201d, and using the result, which is much too slow to be practical).

\n

But you can assign nine apps to \u201cQuick Access\u201d buttons, meaning a long hold of any of the number keys 1 through 9. So, for instance, right now I can hold the button 2 to launch YouTube and 3 to launch Hulu. (A long-hold on 0 allows editing of these buttons.)

\n

A single remote button-press seems like it should be easily automatable. But I can\u2019t for the life of me figure out how to use this functionality via Harmony.

\n

I\u2019ve tried creating a \u201ccommand sequence\u201d named YouTube and assigning the 2 button to it, but there\u2019s no way I can see to adjust the duration of the 2 button press so it\u2019s correctly interpreted as a Quick Access command rather than as a number 2.

\n
\n

Note: For what it\u2019s worth, with my Harmony Elite remote, which has a capacitive touch screen instead of a number pad, you can\u2019t hold a number soft button, either\u2014the remote just gives a quick haptic feedback once to let you know it\u2019s sent a single button press. This is unlike the hard buttons, which you can hold down to either:

\n
    \n
  1. cause built-in long-hold commands to happen on the commanded device, just as for the native remote (such as long-holding a skip-back button to go to the beginning of chapter or title), or
  2. \n
  3. tell Harmony to treat a long hold as an entirely separate command. (For instance, in my set-top box layout, I use a single button with a short press for \u201cProgram guide\u201d and a long press for \u201cVideo on demand\u201d, two commands I don\u2019t use that often\u2014and certainly not with a durational component.)
  4. \n
\n

Obviously, perhaps, you can\u2019t use these together\u2014a button programmed with a long-press action in Harmony can no longer send variable-duration commands to the remote device.

\n

Some Harmony remotes have a physical number pad; I don\u2019t know whether they allow sending long-duration number presses to remote devices or not by physically holding those buttons.

\n
\n

I\u2019ve also tried \u201cadding a missing command\u201d, but this requires \u201cteaching\u201d Harmony the command by pointing the IR remote at the Harmony Hub (or, if so equipped, Harmony remote) IR sensor\u2014and it appears that, while the Magic Remote has IR codes that are loaded into the Harmony online vocabulary, it uses Bluetooth to talk to the LG TV itself, so the IR sensor picks up nothing if I point the Magic Remote at it and long-press 2.

\n

(If I had a spare universal remote, perhaps I could teach it the IR for \u201c2\u201d that Harmony already knows, and then use that remote to teach the Harmony what a long-press \u201c2\u201d looks like? But I don\u2019t have a spare learning remote, and don\u2019t want to buy one just for this unless I\u2019m sure it would work.)

\n

Is there some other way I can set it up so that I can command a particular LG TV app to launch via Harmony?

\n", "Title": "Starting LG Smart TV apps (other than Netflix & Amazon) with Harmony", "Tags": "|logitech-harmony|smart-tv|", "Answer": "

I custom mapped the four colour remote buttons to the number pad (1-4) on my Harmony Elite and long press launches the shortcuts fine.

\n" }, { "Id": "4401", "CreationDate": "2019-08-01T10:15:58.263", "Body": "

Somehow I ended up with a mix of smart home devices. What hub/bridge/connector device should I use to connect the ones below?

\n\n

My devices:

\n\n
    \n
  1. Google Home
  2. \n
  3. Google Home - Mini
  4. \n
  5. Xiaomi (Mijia) Gateway v.2 EU-Version
  6. \n
  7. Osram smart socket
  8. \n
  9. Ikea TR\u00c5DFRI remote

    \n\n
      \n+ \n
  10. \n
  11. Arduino
  12. \n
  13. Raspberry Pi
  14. \n
\n\n
\n\n

Can be a device from my list, an app or something else.

\n\n

EDIT: What I want to achieve: Most importantly I would like to control the Osram smart socket (turn it on/off) through the Ikea TR\u00c5DFRI remote and/or the Xiaomi (Mijia) Gateway.

\n", "Title": "How to connect these different smart home devices?", "Tags": "|google-home|ikea-tradfri|xiaomi-mi|bridge|", "Answer": "

You could use the zigbee2mqtt project. It supports a ton of zigbee devices including your remote and socket, as far as I can tell.

\n\n

However, you need a CC2531 USB stick, which acts as the Zigbee gateway, plugged into a computer (a Raspberry Pi will do).

\n\n

Control of the devices is via MQTT as the name implies.
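For example, zigbee2mqtt exposes each paired device under zigbee2mqtt/<friendly_name>, and you switch it by publishing JSON to the /set subtopic. The friendly name below is a placeholder; substitute the one from your zigbee2mqtt configuration.

```python
import json

def set_state_message(friendly_name: str, on: bool) -> tuple:
    """Build the (topic, payload) pair zigbee2mqtt expects for switching a device."""
    topic = f"zigbee2mqtt/{friendly_name}/set"
    payload = json.dumps({"state": "ON" if on else "OFF"})
    return topic, payload

# Publish with any MQTT client, e.g. from the command line:
#   mosquitto_pub -t "zigbee2mqtt/osram_socket/set" -m '{"state": "ON"}'
```

Button presses from the TRÅDFRI remote arrive as messages on zigbee2mqtt/<remote_name>, so an automation (e.g. in Home Assistant) can listen there and publish the /set message above to toggle the Osram socket.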

\n\n

Nice side-effect: now you can add any kind of supported devices, freely mixing and matching different brands.

\n" }, { "Id": "4409", "CreationDate": "2019-08-02T14:29:57.900", "Body": "

How can I change a GPIO pin from digital to analog on the Dell Edge Gateway? It runs Ubuntu Core as its OS and has only 8 GPIO pins; I want to make some of them digital and some analog. I have checked the Dell Edge Gateway 3001 manual, but it does not explain how, although it does mention that a GPIO pin can be made either digital or analog.

\n", "Title": "How to change dell edge gateway 3001 gpio pin to analog", "Tags": "|linux|gpio|energy-monitoring-systems|snappy|", "Answer": "

Not sure if you found the answer you were looking for, but thought I'd actually sign up and give you some pointers since I've just been down this path myself.

\n\n

The tool you need is 'dcc.cctk' which can be used to view and change the mode of each GPIO.

\n\n

For example, the following command would show the current mode of GPIO0

\n\n
    dcc.cctk --adimodechannel1\n
\n\n

You can also change the mode with one of the following options (unused|adcinput|dacoutput|dacandadc|gpio), e.g.:

\n\n
    dcc.cctk --adimodechannel1=adcinput\n
\n\n

Note: You must use 'sudo' or login as root to issue these commands, and you must also reboot the Dell once you have changed the mode before it takes effect.

\n\n

To change the mode of other GPIO pins, you need to use 'adimodechannelX' where X is the channel number between 1-8. GPIO0 is 'adimodechannel1', GPIO1 is 'adimodechannel2', etc.

\n\n

If you need to print help and usage instructions to screen, then use:

\n\n
    dcc.cctk -h\n
\n\n

or for more specific help with this method:

\n\n
    dcc.cctk -h --adimodechannel1\n
\n\n

Hope this helps.

\n" }, { "Id": "4415", "CreationDate": "2019-08-05T06:46:58.177", "Body": "

I'm working on a device powered by the LinkIt Smart 7688 at the moment, and having some driver issues with building a new OpenWRT image from scratch for it. One of those issues is that the WiFi driver blobs don't seem to have been updated for some time (~2017?), and won't work with the current kernel.

\n\n

Has MediaTek stopped supporting the Linkit Smart 7688?

\n", "Title": "Is the LinkIt 7688 Unofficially Unsupported?", "Tags": "|linux|", "Answer": "

MediaTek provides its own binary WiFi driver in its feeds, based on the old OpenWrt 15.05 release. It seems not to have been updated for years (https://github.com/MediaTek-Labs/linkit-smart-7688-feed), so it is hard to know whether they still support it or not.

\n\n

Luckily, some nice people have built a new, more open driver called mt76 (https://github.com/openwrt/mt76). It is included in OpenWrt 18.06 (Kernel modules > Wireless Drivers) and it should work better than the MediaTek one. Use the latest version (18.06.4), as it contains improvements to this driver.

\n" }, { "Id": "4417", "CreationDate": "2019-08-05T09:58:41.887", "Body": "

I am very new to IoT and don't have much knowledge of anything to do with it. I have spent a long time trying to learn more, but everywhere I go there is too much assumed knowledge that I don't have. My end goal is to develop applications that collect data from IoT devices and present it to a client, and I've decided the best way to get there is practical experience. So I signed up to AWS and want to begin using its IoT service to get some idea of how it's done. To do this I need some devices, but I don't have any. Is there anywhere that lets me play around with some fake or simulated devices for practice?

\n", "Title": "I am new to IOT and am looking for some platform I can use to learn how to create IOT applications", "Tags": "|aws|", "Answer": "

Welcome. I love your question, and have marked it a favo(ur)ite. However, it will probably be closed as far too broad. If you can narrow down your requirements, you will probably get a good answer (try to avoid GIGO). Here is one of many possible paths that you could take:

\n\n
\n

My end goal is to be able to develop applications for collecting data from IOT devices and presenting it to a client

\n
\n\n

Collect data only (telemetry) or also control the devices (SCADA)?

\n\n
\n

So I signed up to AWS

\n
\n\n

Why? Is AWS a hard requirement, or did you just associate it with IoT and think \u201cwhy not\u201d?

\n\n

You certainly don't need AWS, and I would suggest starting with the simplest system you can think of and working your way into something more complicated.

\n\n

What do you need?

\n\n
    \n
  1. Some hardware to measure something
  2. \n
  3. A way to send the measurements to a client
  4. \n
  5. A way to display the data to a client (presumably a human client?)
  6. \n
\n\n

1) The Raspberry Pi is very popular and has a lot of support and tutorials; it even has its own Stack Exchange site. At less than US $10, it has WiFi, Bluetooth and GPIO pins for your sensors (you can also buy HATs with sensors to fit those pins, if you are wary of soldering or even breadboards). The Raspberry Pi Zero W is the cheapest with an internet connection and should be enough for any IoT project you are considering.

\n\n

Having said that, I personally (and your mileage may vary) find the Pi to be overkill, as I do not see that you need Linux. If you give more detailed requirements, we can point you at a processor. A good starter would be the ESP32, which you can pick up on AliExpress from about $7, maybe $14 with a display. I also like the STM32, which I am about to look into because of its support for Ada 2012, but that might be a tad esoteric for you. Consider also the BBC Micro:bit, which is expressly designed for learners (they gave one to every school kid in the UK) and can be programmed in Python (as can the others I mentioned), or Ada ;-), or in a graphical language.

\n\n

If you want a recommendation for an ESP32, take a look at M5Stack, which is a modular system that allows you to easily combine (stack) devices and accessories (e.g. a battery) and has a dozen or more sensors to measure many things. It is cheap (IMO) and easy to get to grips with; I bought the lot, just to have a play. One bonus is that you don't need to do any soldering or breadboarding (although you can); everything is plug 'n' play. You might be interested in the wrist \u201cwearable\u201d, which looks pretty cool and which I am using to track employees. See below for an image.

\n\n

2) So, now you have programmed your device to read some sensor data, and want a way to send it to a server. There are numerous ways to do this, but personally I see it as a choice between developing a RESTful HTTP API, or using something FOSS and off the shelf.\nFor the latter, there are a few options, but MQTT is very popular and well documented, and there are lots of examples. It is also very simple to learn.

\n\n
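As a taste of the RESTful-HTTP option, here is a minimal, self-contained sketch using only the Python standard library; the /telemetry endpoint and the JSON field names are made up for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # what the "server" side has collected

class TelemetryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the device sent and store it
        length = int(self.headers["Content-Length"])
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(204)  # acknowledge with no body
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def post_reading(url, reading):
    """Device side: POST one sensor reading as JSON."""
    req = Request(url, data=json.dumps(reading).encode(),
                  headers={"Content-Type": "application/json"})
    return urlopen(req).status

# Run a throwaway server on localhost and send one reading to it
server = HTTPServer(("127.0.0.1", 0), TelemetryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = post_reading(f"http://127.0.0.1:{server.server_port}/telemetry",
                      {"sensor": "dht22", "temperature_c": 21.5})
server.shutdown()
print(status, received)
```

On a real device the `post_reading` half would run on the microcontroller (e.g. with an HTTP client library) while the handler half runs on your server.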

3) Displaying the data. Again, there is a plethora \u2013 nay, a cornucopia \u2013 of choices for reports. However, the FOSS Node-RED is simple to use and very popular with good support.

\n\n

You mentioned AWS, but that\u2019s just one more thing to confuse yourself with at the start (not to say that you shouldn\u2019t get into it later). Simpler is to run your own server, on your own PC. If you run Linux, you already have Apache installed. If Windows, then install something like XAMPP, which will run an Apache server on Windows and doesn\u2019t require much/any knowledge.

\n\n

That, as I said, is just one of many, many, many, many, many, many, many, many, many, many, many options, but it will get you started & allow you to pick up some other skills along the way.

\n\n

If you do go for ESP32, then, in addition to many great free tutorials, I personally recommend reading Kolban's book on ESP32 and ESP32 Programming for the Internet of Things.\nFor good tutorials, look at Random Nerd and Hackster. As an IDE, I strongly recommend PlatformIO over the Arduino IDE, as it supports debugging on many boards, plus lots more, and it has a great support community.

\n\n

\"enter

\n\n

Did I miss any general advice?

\n\n

I encourage others to expand on this answer. While the question is too broad to stay open, an answer would be very useful on the community wiki.

\n" }, { "Id": "4420", "CreationDate": "2019-08-06T12:15:48.370", "Body": "

I need to understand the difference between HTTP and MQTT connection.

\n\n

I want to send data from a Server to a Client without using port forwarding.

\n\n

If I understand correctly, using HTTP this is not possible; with HTTP the client will connect to the server, ask (i.e. a GET or POST request) and receive data from it.

\n\n

But with MQTT, is the opposite possible? I mean, could the server send data to the client without having received a request,\nsimply because the client has subscribed to the broker topic?

\n", "Title": "MQTT vs HTTP Connection - Server to Client", "Tags": "|mqtt|", "Answer": "

Yes

\n\n

(I'm not sure what else to say here, what you have described is true).

\n\n

HTTP is a transactional request/response protocol:

\n\n\n\n

* Yes I know you can set the Keep-Alive header and the connection will be kept open while the server waits for another request, but it's still request driven.

\n\n

MQTT is a Pub/Sub protocol

\n\n\n\n

The connection is persistent for the lifetime of the client. Because the connection is initiated by the client (it opens the outbound connection) this works when the client is behind a NAT gateway. As long as the Broker is publicly available then this all works fine.

\n\n

In the situation you have described the \"Server\" would just be another client connected to the Broker that publishes messages that other clients are subscribed to.
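The pub/sub flow described above can be sketched in plain Python. This is just the pattern, not a real MQTT implementation (a real broker such as Mosquitto adds QoS, retained messages, topic wildcards and persistent TCP connections):

```python
from collections import defaultdict

class TinyBroker:
    """Toy in-memory broker: clients subscribe to topics, publishers
    push messages, the broker forwards them to every subscriber."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = TinyBroker()
inbox = []  # the "client behind NAT" just collects what it receives

# The client subscribes once; after that it only ever *receives*
broker.subscribe("home/livingroom/temp", lambda t, p: inbox.append((t, p)))

# The "server" is just another client publishing to the broker
broker.publish("home/livingroom/temp", "21.5")
print(inbox)  # [('home/livingroom/temp', '21.5')]
```

The key point the toy shows: once subscribed, the receiving client never issues another request; data arrives whenever someone publishes to the topic.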

\n" }, { "Id": "4432", "CreationDate": "2019-08-10T18:02:39.920", "Body": "

I am trying to figure out how to convert an image to base64 on the ESP32 Cam board so I could then send it to AWS s3 bucket. Could someone please give me some insight?

\n\n

\"camera

\n", "Title": "ESP32 Cam send picture to cloud AWS", "Tags": "|esp32|aws|", "Answer": "

The first problem we have here is: ESP32 may not have enough RAM to do it.

\n\n

There are a few examples on the web on how to save data collected from the ESP32-CAM to a microSD card. If you are familiar with that, you can now take a look at the base64.h library for the ESP32 Arduino environment. If not, there is a very comprehensive tutorial by Rui Santos on YouTube, which you can find at this link.

\n\n

Base64.h is a library for the ESP32 (core) that implements base64 conversions as simple decode and encode functions. You can take a look at the source on GitHub.

\n\n
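On the ESP32 itself this would be C++ with base64.h, but the conversion is the same everywhere; a Python sketch of the round trip, where the dummy buffer stands in for the camera's JPEG frame:

```python
import base64

# Pretend this is the JPEG byte buffer returned by the camera driver
# (on a real ESP32-CAM this comes from the frame buffer, and RAM is
# the limiting factor -- a VGA JPEG can already be tens of KB)
jpeg_bytes = bytes(range(256)) * 4  # 1024 dummy bytes

encoded = base64.b64encode(jpeg_bytes)   # bytes -> base64 bytes
decoded = base64.b64decode(encoded)      # round-trip back

# base64 inflates the payload by ~33%, which also matters for RAM
print(len(jpeg_bytes), len(encoded), decoded == jpeg_bytes)
```

Note the ~33% size inflation: if RAM is already tight holding the JPEG, holding the JPEG plus its base64 form at the same time may not fit.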

I have never used the ESP32-CAM, but the result you want to achieve seems very simple to me. Have a look at those links and give it a try.

\n" }, { "Id": "4452", "CreationDate": "2019-08-19T07:04:23.090", "Body": "
    \n
  1. In my routine I have Alexa speak a paragraph of text, just 2 sentences. One of them is a question, but regardless of whether I add a question mark or a period, she reads the text in the same exact voice. I'd love to have her read the text as a question, aka change her tone a bit. Is this possible?

  2. \n
  3. She reads custom text way too fast; is there any way to slow it down?

  4. \n
\n", "Title": "How to make Alexa change how she reads custom text?", "Tags": "|alexa|", "Answer": "

As far as I can tell, you can't change how Alexa speaks in the \"Say\" action of a routine. This reddit thread seems to agree, saying that you might be able to use homonyms to fix incorrect pronunciations, but there's nothing there to change the speech rate or pitch.

\n\n

Alexa skills can return SSML to change how the speech is generated, for example a skill could return the following to slow down some of the speech:

\n\n
<speak>\n    Hello.\n    <prosody rate=\"slow\">I am now speaking slowly.</prosody>\n    And now I am speaking normally again.\n    Is this cool?\n</speak>\n
\n\n

It's also worth noting that, in skill responses, the documentation states:

\n\n
\n

Alexa automatically handles normal punctuation, such as pausing after a period, or speaking a sentence ending in a question mark as a question.

\n
\n\n

Unfortunately, it doesn't seem like you can use SSML in a routine, so this isn't particularly useful to you. It is odd that question marks aren't respected, but again a normal skill would be able to support that. If it matters a lot to you, you could try to make a custom skill, but that would be much more involved. You might just have to accept the poor speech generation for now, sadly.

\n" }, { "Id": "4474", "CreationDate": "2019-08-28T08:43:23.543", "Body": "

I have at home some ready-made sensors I bought, and more custom ones will come; therefore I would like to collect all their data and set rules/actions based on them.

\n\n

I looked into the various alternatives but, after discovering Node RED, it is no longer clear to me what overlap there is between OpenHAB, Home Assistant and Node RED in regard to data acquisition, storage, display and processing (rules).

\n\n

Initially I checked Home assistant and I saw that it can

\n\n
    \n
  1. collect data with plugins/addons
  2. \n
  3. store it in its own database (round-robin, time-limited)
  4. \n
  5. store it (\"persistence\") on external databases
  6. \n
  7. process data (YAML rules/automations), from what I understand using data in its own database when knowledge of the previous values is required
  8. \n
  9. display data (web GUI)
  10. \n
\n\n

From what I understood, OpenHAB 2 does 1, 3, 4bis and 5: it uses the external database (\"persistence\") also for rules. It therefore appears conceptually more efficient, since one database is skipped.

\n\n

I thought that Node RED was just a \"graphical rule editor/processor\" compatible with both previous alternatives, but now I discovered that it can do at least 1, 3 and 4bis: collect data via a (quite limited) number of input plugins, store the values in an external database, and perform rules querying said external database.

\n\n

At this point, is there any remaining difference between Node RED and the other two? What are the conceptual differences and similarities between OpenHAB, Home Assistant and Node RED?

\n", "Title": "What are the conceptual differences and similarities between OpenHAB, Home Assistant and Node RED?", "Tags": "|home-assistant|node-red|openhab|", "Answer": "

As the comments point out, the three tools are quite similar in some ways. One point I can add is that they are also very adaptable! So the choice also relies on your ability/ease of use with any of these tools.

\n

To answer specifically, from my own experience:

\n\n" }, { "Id": "4489", "CreationDate": "2019-09-04T05:26:32.043", "Body": "

I am completely new to the world of IoT and I want some recommendations. I want to build a home automation system that is accessible from anywhere in the world using internet. The ultimate goal is the ability to access my home devices from around the globe.

\n\n

Over the internet, I have seen many tutorials, but most of them either deal with local networks or third party cloud networks (e.g. AWS or HiveMQ) which charge a fee.

\n\n

Is there a way I can set up a cloud MQTT broker on my laptop or Raspberry Pi that is accessible from anywhere in the world? Is there an existing piece of software (Node-RED?) that can help me with this?

\n", "Title": "Cloud mqtt broker", "Tags": "|mqtt|aws-iot|node-red|", "Answer": "

Refer to the links below; these provide extra features such as topic-level security, authorisation and authentication:

\n\n\n" }, { "Id": "4516", "CreationDate": "2019-09-17T10:57:14.597", "Body": "

By serendipity I checked the usage of CoAP via Shodan, and to my surprise there is an extreme imbalance between its usage in eastern European countries/China/Russia and the rest of the world, e.g. USA ~300 to Russia ~300000.

\n\n

Is there an explanation for this apparent result or is there indeed some trend in IoT that differs regionally?

\n\n

Update:

\n\n

Thank you for editing @Sean Houlihane.

\n\n

Due to the current answers I checked the results again, and yes, there is some correlation with mobile/phone service providers. China Mobile Guangdong, Rostelecom and China Mobile Shandong are among the top organizations.

\n\n

Because of the open nature of the question I tend to accept Achim's answer, since it gives a hint towards local trends and differences.

\n\n

Nevertheless, thank you all for the open discussion, since the topic could also have been seen as totally off topic.

\n", "Title": "Usage of Constrained Application Protocol (CoAP) in eastern european countries/russia and china", "Tags": "|coap|", "Answer": "

In addition to Achim's comment (and with the same uncertainty as to whether that'd be an answer or a comment), you may want to consider the following:

\n\n

A single installation rolling out MQTT would show up as a single broker (or a small number of brokers), while CoAP uses direct connections and thus, in some setups, has each node discoverable individually. A single CoAP roll-out by, say, an internet provider that uses CoAP for managing their modems could therefore easily show up as hundreds of thousands of instances. The 300k devices in Russia or China could plausibly stem from a handful of installations, with no such large installations having happened in the USA, without any statistical significance.

\n\n

This is not to say that there is no imbalance between CoAP adoption in the countries, only that those vast reported numbers may not necessarily support that conclusion.

\n" }, { "Id": "4517", "CreationDate": "2019-09-17T11:39:46.130", "Body": "

I am using this MQTT-SN bridge, with no changes except that I rigged it to give a hex dump of the buffer if it does not contain a valid MQTT-SN packet.

\n\n

I am experiencing a strange problem: sometimes I find the bridge works perfectly well, and other times I find the problem I describe below. I have an ESP32 microcontroller which is just publishing a message here and there to say it's alive. It queries various sensors (humidity, dust ...) as well, and publishes the results also, as you will see. It's writing to a LoRa module, and correspondingly I have a LoRa module on a USB stick which is plugged into my laptop. Then I run

\n\n
./mqtt-sn-serial-bridge -b 19200 /dev/ttyUSB0\n
\n\n

to see output like the following:

\n\n
2019-09-17 12:20:18 ERROR Error reading rest of packet from serial port: 0, 22\n2019-09-17 12:20:19 WARN  Read 10 bytes but packet length is 165 bytes.\n....|.....  0xa5 0x08 0xa0 0xa6 0x7c 0x84 0x85 0x0c 0x86 0x85 \n2019-09-17 12:20:19 WARN  Read 17 bytes but packet length is 37 bytes.\n%..@..H.........f   0x25 0xc5 0x0c 0x40 0x87 0x84 0x48 0x04 0x85 0x9c 0xc4 0x85 0x88 0xc4 0x86 0xee 0x66 \n2019-09-17 12:20:21 WARN  Read 8 bytes but packet length is 40 bytes.\n(.....xD    0x28 0xa5 0x04 0xc0 0xa2 0x87 0x78 0x44 \n2019-09-17 12:20:23 WARN  Read 8 bytes but packet length is 67 bytes.\nC.......    0x43 0xa5 0x0a 0xc0 0x82 0x0a 0x06 0xf6 \n
\n\n

If I run screen /dev/ttyUSB0 19200 8n1 I get something like this:

\n\n
bf5fish5 is a live\"b9H0.000000 relative humidity\n                                                bf5PM2.5 = 10bf5PM10 = 11bf5fish5 is a live\"b9H0.000000 relative humidity\n                                                                                                                         bf5PM2.5 = 10bf5PM10 = 11\n
\n\n

What I found remarkable is that while screen can get human readable characters, the MQTT-SN bridge just seems to get garbage.

\n\n

Does anyone have any experience with this bridge? I'm unsure whether the problem is with the bridge itself or somehow with the LoRa substrate. Or can anyone see where I'm going wrong?

\n", "Title": "What is wrong with this MQTT-SN bridge?", "Tags": "|mqtt|lora|", "Answer": "

Baud rate

\n
\n

What I found remarkable is that while screen can get human readable characters, the\nMQTT-SN bridge just seems to get garbage.

\n
\n

This turned out to be the important part of tracking down the problem. GNU screen sets the baud rate, but mqtt-sn-serial-bridge wasn't doing so. That turns out to be because

\n
$ ./mqtt-sn-serial-bridge -?\nUsage: mqtt-sn-serial-bridge [opts] <device>\n\n  -b <baud>      Set the baud rate. Defaults to 13.\n  -d             Increase debug level by one. -d can occur multiple times.\n  -dd            Enable extended debugging - display packets in hex.\n  -h <host>      MQTT-SN host to connect to. Defaults to '127.0.0.1'.\n  -p <port>      Network port to connect to. Defaults to 1883.\n  --fe           Enables Forwarder Encapsulation. Mqtt-sn packets are encapsulated according to MQTT-SN Protocol Specification v1.2, chapter 5.5 Forwarder Encapsulation.\n
\n

the option -b doesn't take a baud rate as its argument. See, its default is 13, quite a strange baud rate, isn't it? It takes a number, which gets handed over to some lower-level API that sets the baud rate. On my system, these baud rates and their corresponding numbers are:

\n
 0:      0\n 50:     1\n 75:     2\n 110:    3\n 134:    4\n 150:    5\n 200:    6\n 300:    7\n 600:    8\n 1200:   9\n 1800:   10\n 2400:   11\n 4800:   12\n 9600:   13\n 19200:  14\n 38400:  15\n 57600:  4097\n 115200: 4098\n 230400: 4099\n
\n

So you can use -b 13 to set the baud rate to 9600, or -b 14 to set it to 19200. This was really unintuitive to me, so I forked the project to make sure that -b takes the actual baud rate. But this has been fixed since.
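Those magic numbers are in fact the POSIX termios Bxxxx speed constants, which on Linux are small enumerators rather than the literal rates (this is platform-specific; on macOS, for instance, B9600 is simply 9600). A quick check from Python on a Linux box:

```python
import termios

# On Linux the Bxxxx speed constants are exactly the small integers
# the bridge's -b option expects (13 for 9600, 14 for 19200, ...)
for rate in (9600, 19200, 38400, 57600, 115200):
    print(rate, getattr(termios, f"B{rate}"))
```

So the bridge was passing the user-supplied number straight to the termios speed-setting call, which is why 19200 (not a valid constant) silently misconfigured the port.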

\n" }, { "Id": "4529", "CreationDate": "2019-09-19T12:42:00.793", "Body": "

I have an old device that's located in a place where I don't have an easily accessible data connection. What I do today is drive to where this device is located, plug a DB9 RS232 serial connector into it, and turn on a laptop that runs this device's old Win32 software to download the latest acquired data points, clean the memory of the device, and let it keep logging data points. In two weeks, I need to drive out and do all of this again.

\n\n

I would like to connect to this device's RS232 port remotely, ideally creating a virtual serial COM port on the computer and running the software I usually run locally, remotely. This Win32 software can also run under Wine if needed without problems.

\n\n

When looking for remote access to RS232 serial ports, I could only find solutions for connecting through WiFi or other local networks, but I would like to access the port through the internet. If this can be achieved using Android devices, even better. Is there any solution available for this problem?

\n", "Title": "Accessing legacy device with RS232 over 4G Internet", "Tags": "|data-transfer|mobile-data|", "Answer": "

I've worked with RTUs (remote terminal units); this is a common requirement.

\n\n

Maybe a simple RS232/4G modem is enough, e.g. the GTM201; I've used those in the past.\nDepending on your specific needs, you'll have to check for devices which suit you better.

\n\n

I've also used a Robustel solution. I've used one similar to M1200.

\n\n

Moxa could also have options.

\n\n

You'll have to search for \"serial gateways\". Building one with an RPi, a USB-to-RS232 converter and a 4G modem is also an option.
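If you roll your own with an RPi, something like ser2net can expose the serial port over TCP. A minimal sketch of a classic-style ser2net.conf line, where the TCP port, device path and line settings are assumptions for illustration:

```
# /etc/ser2net.conf (classic ser2net syntax): expose the RS232 port
# on TCP port 2000, raw mode, 600 s idle timeout
2000:raw:600:/dev/ttyUSB0:19200 8DATABITS NONE 1STOPBIT
```

On the laptop side, a virtual-COM-over-TCP tool (e.g. socat on Linux, or com0com plus a TCP redirector on Windows) can map that stream back to the COM port your Win32 software expects.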

\n" }, { "Id": "4533", "CreationDate": "2019-09-21T14:32:19.940", "Body": "

I have a couple of dimmable, but non-colored, IKEA lights. I have learned that the proper way to factory reset them is to turn them off and on again 6 times quickly. That worked for Tr\u00e5dfri lights I had of a different model.

\n\n

However, when I do this to these non-colored lights, they don't reset. I noticed that they don't seem to go out fully if I do it too quickly (unlike the other models), but neither waiting for them to light up and dim out fully nor ignoring that and just doing it quickly worked. What should I do?

\n", "Title": "How to factory reset non-colored IKEA Tr\u00e5dfri bulbs?", "Tags": "|smart-home|smart-lights|ikea-tradfri|", "Answer": "

The Mozilla IoT wiki made note of this exact problem.

\n\n
\n

If you have problems resetting the plain white (non-colour) IKEA bulb, try making the \"ons\" very brief (less than a second) and the \"offs\" longer

\n
\n\n

This solution worked well for me, and I was able to reset them without much trouble after trying it.

\n" }, { "Id": "4538", "CreationDate": "2019-09-22T14:46:23.810", "Body": "

The ESP8266 provides an amazing environment for building IoT devices for the smart home. I managed to build all kinds of gadgets, ranging from simple temperature and humidity recorders to more advanced actors with motion sensors and appliance controls. There are tons of components available for those.

\n\n

Recently, I entered the area of heating automation in the smart house, discovered the Homematic IP line of products, and was impressed by how energy efficient they are and what an impressive operating range they have.

\n\n

My current setups involve many NodeMCUs (as well as off-the-shelf Homematic and Zigbee devices) on one side, and a Raspberry Pi 3 with openHAB integrating those into one thingy on the other side.

\n\n

I got these questions:

\n\n
    \n
  1. I wonder how steep the learning curve is to get into Zigbee/Z-Wave development.
  2. \n
  3. Thanks to the NodeMCU, one can start developing and testing setups within days. Are there any similar inexpensive development boards for Zigbee/Z-Wave? I tried to browse the net a bit, but all I found were kits costing at least a few hundred euros/dollars.
  4. \n
  5. Can one reuse the same sensor components to work with Zigbee/Z-Wave boards?
  6. \n
\n", "Title": "z wave and zigbee development boards", "Tags": "|esp8266|zigbee|zwave|", "Answer": "

Some additional resources on the net on how to connect Arduino & co. with XBee.

\n\n

ForceTronics blog:

\n\n\n" }, { "Id": "4539", "CreationDate": "2019-09-22T20:37:34.507", "Body": "

I need advice on a sort of basic home automation. Let's say I have 3 multimedia points in my home. Two of them are TVs with attached receivers, players and game consoles; almost everything is network-attached. The other one is just a stereo receiver with a vinyl turntable and an RPi as a network audio player.

\n\n

Right now everything is quite messy, because I need to use 3-5 input devices (remotes, gamepads etc) to operate just one point.

\n\n

It would be cool to implement a kind of Universal Remote which I would be able to use from my Android phone or tablet, or from a laptop. I need some zone division of course. I would also like to have layouts with some basic actions like Vol+, Vol- or PwrBtn, but also some chained actions, like \"let's play a movie\": turn on the TV, switch to that input, turn on the receiver, switch to that input, turn on the player, start an Android remote app for it, etc.

\n\n

Is there any platform on which it wouldn't be too painful to implement? As for hardware, I have a dedicated PC which I can use as a server, a couple of zmote remotes and a couple of Microsoft IR blasters/receivers. I'm an IT geek with moderate programming skills, so I'm not limited to consumer-only solutions.

\n", "Title": "Advanced Universal Remote or basic home automation", "Tags": "|smart-home|android|infrared|", "Answer": "

I'll answer myself. Home Assistant is just enough to implement almost anything I've wanted. I tried OpenHAB and it took waaaay too long until I was able to automate anything. Also, I got really confused by its dashboards; it doesn't look like an easy-to-enter solution.

\n\n

On the other hand, I gave Home Assistant a shot, and within just a weekend I had already added all the devices I have and am willing to automate. The dashboards are quite messy too, but not that hard to use. And I didn't have to write a line of Python or other code, simple YAML configs at most.

\n" }, { "Id": "4542", "CreationDate": "2019-09-24T05:25:01.710", "Body": "

Looking for some API docs on querying & control of Belkin Wemo switches/plugs.

\n\n

I have discovery working but can't seem to find any concise documentation describing how to do much else.

\n\n

I can see discovery events & process the xml details:

\n\n
HTTP/1.1 200 OK\nCACHE-CONTROL: max-age=86400\nDATE: Sat, 21 Sep 2019 05:11:35 GMT\nEXT:\nLOCATION: http://[IP]:49153/setup.xml\nOPT: \"http://schemas.upnp.org/upnp/1/0/\"; ns=01\n01-NLS: [ID]\nSERVER: Unspecified, UPnP/1.0, Unspecified\nX-User-Agent: redsonic\nST: urn:Belkin:service:basicevent:1\nUSN: uuid:Lightswitch-1_0-[ID]::urn:Belkin:service:basicevent:1\n
\n\n

What do I do with these? :

\n\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<serviceList>\n    <service>\n        <serviceType>urn:Belkin:service:WiFiSetup:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:WiFiSetup1</serviceId>\n        <controlURL>/upnp/control/WiFiSetup1</controlURL>\n        <eventSubURL>/upnp/event/WiFiSetup1</eventSubURL>\n        <SCPDURL>/setupservice.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:timesync:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:timesync1</serviceId>\n        <controlURL>/upnp/control/timesync1</controlURL>\n        <eventSubURL>/upnp/event/timesync1</eventSubURL>\n        <SCPDURL>/timesyncservice.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:basicevent:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:basicevent1</serviceId>\n        <controlURL>/upnp/control/basicevent1</controlURL>\n        <eventSubURL>/upnp/event/basicevent1</eventSubURL>\n        <SCPDURL>/eventservice.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:firmwareupdate:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:firmwareupdate1</serviceId>\n        <controlURL>/upnp/control/firmwareupdate1</controlURL>\n        <eventSubURL>/upnp/event/firmwareupdate1</eventSubURL>\n        <SCPDURL>/firmwareupdate.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:rules:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:rules1</serviceId>\n        <controlURL>/upnp/control/rules1</controlURL>\n        <eventSubURL>/upnp/event/rules1</eventSubURL>\n        <SCPDURL>/rulesservice.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:metainfo:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:metainfo1</serviceId>\n        <controlURL>/upnp/control/metainfo1</controlURL>\n        <eventSubURL>/upnp/event/metainfo1</eventSubURL>\n        
<SCPDURL>/metainfoservice.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:remoteaccess:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:remoteaccess1</serviceId>\n        <controlURL>/upnp/control/remoteaccess1</controlURL>\n        <eventSubURL>/upnp/event/remoteaccess1</eventSubURL>\n        <SCPDURL>/remoteaccess.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:deviceinfo:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:deviceinfo1</serviceId>\n        <controlURL>/upnp/control/deviceinfo1</controlURL>\n        <eventSubURL>/upnp/event/deviceinfo1</eventSubURL>\n        <SCPDURL>/deviceinfoservice.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:smartsetup:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:smartsetup1</serviceId>\n        <controlURL>/upnp/control/smartsetup1</controlURL>\n        <eventSubURL>/upnp/event/smartsetup1</eventSubURL>\n        <SCPDURL>/smartsetup.xml</SCPDURL>\n    </service>\n    <service>\n        <serviceType>urn:Belkin:service:manufacture:1</serviceType>\n        <serviceId>urn:Belkin:serviceId:manufacture1</serviceId>\n        <controlURL>/upnp/control/manufacture1</controlURL>\n        <eventSubURL>/upnp/event/manufacture1</eventSubURL>\n        <SCPDURL>/manufacture.xml</SCPDURL>\n    </service>\n</serviceList>\n
\n", "Title": "Belkin Wemo API?", "Tags": "|wemo|", "Answer": "

Credit to @hardillb for pointing me in the right direction. Here's a basic message structure for control:

\n\n
POST /upnp/control/basicevent1\nSOAPACTION: \"urn:Belkin:service:basicevent:1#SetBinaryState\"\nContent-Type: text/xml; charset=\"utf-8\"\nAccept: */*\nUser-Agent: PostmanRuntime/7.15.2\nCache-Control: no-cache\nHost: [DEVICE_IP]:49153\nAccept-Encoding: gzip, deflate\nContent-Length: 306\nConnection: keep-alive\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">\n    <s:Body>\n        <u:SetBinaryState xmlns:u=\"urn:Belkin:service:basicevent:1\">\n        <BinaryState>0</BinaryState>\n        </u:SetBinaryState>\n    </s:Body>\n</s:Envelope>\n\nHTTP/1.1 200\nstatus: 200\nCONTENT-LENGTH: 289\nCONTENT-TYPE: text/xml; charset=\"utf-8\"\nDATE: Tue, 24 Sep 2019 13:47:09 GMT\nEXT:\nSERVER: Unspecified, UPnP/1.0, Unspecified\nX-User-Agent: redsonic\n\n<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">\n    <s:Body>\n        <u:SetBinaryStateResponse xmlns:u=\"urn:Belkin:service:basicevent:1\">\n            <BinaryState>0</BinaryState>\n            <CountdownEndTime>0</CountdownEndTime>\n            <deviceCurrentTime>1569333003</deviceCurrentTime>\n        </u:SetBinaryStateResponse>\n    </s:Body>\n</s:Envelope>\n
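For scripting, the same SetBinaryState call can be built with Python's standard library. The IP here is a placeholder, and the control port may differ between firmware versions:

```python
from urllib.request import Request, urlopen

SOAP_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>{state}</BinaryState>
    </u:SetBinaryState>
  </s:Body>
</s:Envelope>"""

def build_request(device_ip, state, port=49153):
    """Build (but do not send) the SetBinaryState SOAP request.
    state: 1 = on, 0 = off. The port is an assumption; some Wemo
    firmwares use a different port in the 4915x range."""
    body = SOAP_TEMPLATE.format(state=state).encode()
    return Request(
        f"http://{device_ip}:{port}/upnp/control/basicevent1",
        data=body,
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
        },
    )

req = build_request("192.168.1.50", 1)  # placeholder IP
print(req.full_url)
# To actually switch the plug: urlopen(req) (blocks until the device replies)
```

The endpoint and action names come straight from the setup.xml service list above; only the IP/port are assumptions you must fill in from discovery.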
\n" }, { "Id": "4591", "CreationDate": "2019-10-09T06:36:04.280", "Body": "

As a follow-up from Can I retrieve a Python program from an ESP32?, I would like to make things a little more difficult for those who would steal my code (I do realize that I cannot prevent it).

\n\n

Can I compile Python on my PC, load it to an ESP32 and run it?

\n", "Title": "Can I compile Python on my PC, load it to an ESP32 and run it?", "Tags": "|esp32|python|", "Answer": "

Based on this forum post, it is possible to include precompiled scripts (in a modules directory at build time) in the flash image. This requires the config FROZEN_MPY_DIR.

\n\n

It is also possible to cross-compile using mpy-cross, but it looks like this requires some micropython source code changes (#define MICROPY_PERSISTENT_CODE_LOAD (1)), and also what looks like a bugfix in emitglue.c.

\n\n

Although you don't save much code space, it looks like it is also possible to omit the compiler, though really I think it would be simpler to omit the REPL so there is no trivial software interface to the hardware.

\n\n

The reference used was 2 years old, so there is a good chance that the state-of-the-art has moved on since then.

\n" }, { "Id": "4593", "CreationDate": "2019-10-09T10:12:44.590", "Body": "

Can I interface with my server's RESTful HTTPS API directly over 802.15?

\n\n

Or do I need a gateway?

\n\n

Any good overview/tutorial for 802.15 (not too in-depth, just to get an overview)?

\n", "Title": "(how) can 802.15 access the internet?", "Tags": "|https|802.15|", "Answer": "

You need a bridge/gateway.

\n\n

From what little I remember from looking at LPWAN Zigbee networks a LONG time ago and assuming I've not got the wrong end of the stick...

\n\n

Something that will change the physical layer of the network (Zigbee to probably Ethernet/WiFi) and possibly do NAT (link-local IPv6 to either globally unique IPv6 or IPv4)

\n\n

I also used to run a Bluetooth (classic) PAN that used my phone (Sony T68i, IIRC) as the gateway to provide internet to my PDA (in the golden years before smartphones). Bluetooth PANs would support up to 8 devices, handing out private IPv4 addresses and NATing to the rest of the world.

\n" }, { "Id": "4602", "CreationDate": "2019-10-11T13:41:37.747", "Body": "

I would like to send a location based push notification to all devices that enter a range of some kind of IOT device.

\n\n

For example, I would have a device which sends push notifications to any phone that enters range of the device. Similar to location based push notifications.

\n\n

How could you do this? Is it possible?

\n", "Title": "How can you send location based push notifications to all android/ios devices that enter range of an IOT device", "Tags": "|android|ios|", "Answer": "

This is exactly what BLE beacon technology does.

\n\n

It requires an app* on the device to tell the phone to listen for the broadcasts and then act accordingly when it receives one.

\n\n

Beacons have a normal range of approximately 10 m.

\n\n

* An app is required because otherwise this would be a truly horrific way to force advertising on people. Before Google gave up on the idea (Eddystone Beacons), Android had a reasonably nice compromise where the beacons would push a URL pointing to a website that would appear in the notification area (without any noise or vibration).

\n" }, { "Id": "4611", "CreationDate": "2019-10-15T23:10:41.120", "Body": "

Wireless sensor communication has message size limitations due to low power consumption.

\n\n

Assume that we have a heart rate sensor sending data via BLE, or GPS data sent via LoRa to a receiver: the message size is a few bytes.

\n\n

If we use standard encryption algorithms carelessly, the message size can increase tremendously, to hundreds of bytes. Encryption and decryption will also use much more power during processing.

\n\n

So, there has to be a trade-off.

\n\n

What are the best practices to encrypt/decrypt the tiny data?

\n", "Title": "How to encrypt sensor data in low-power wireless communication like LoRa, BLE, ...?", "Tags": "|wifi|lora|bluetooth-low-energy|nb-iot|", "Answer": "

You do not have to increase message size if you pick the right standard encryption algorithm for your application. You can check how it's done in the LoRaWAN protocol and apply the same methodology to any PHY protocol you want, LoRa, FSK or any other kind of modulation.

\n\n

LoRaWAN uses AES in CTR mode for encryption, and this mode doesn't add any overhead (if the payload is 1 byte, the encrypted payload is 1 byte). For authentication, LoRaWAN uses CMAC to generate a 4-byte signature. This is probably acceptable overhead in all but the most resource-limited applications. All those cryptosystems are based on symmetric keys. To avoid common attacks, a long-term secret symmetric key (specific to each device) is used to generate session keys that are negotiated in a secure way between both ends of the link.

\n\n
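To see why a counter-mode stream cipher adds no payload overhead, here is a minimal Python sketch. It substitutes SHA-256 as a toy keystream generator only because Python's standard library has no AES; a real deployment would use AES-CTR from a vetted crypto library, as LoRaWAN does.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: hash(key || nonce || counter), sliced to length.
    # Stands in for the AES-CTR keystream; do NOT use in production.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, payload: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    ks = keystream(key, nonce, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

key = b"per-device-secret"
nonce = b"\x00\x01"   # e.g. a frame counter, sent in the clear
payload = b"\x48"     # a 1-byte heart-rate reading

ct = xor_crypt(key, nonce, payload)
assert len(ct) == len(payload)               # no ciphertext expansion
assert xor_crypt(key, nonce, ct) == payload  # same call decrypts
```

Note that the 4-byte CMAC signature for authenticity is separate from the encryption, which is why LoRaWAN's total overhead is 4 bytes rather than zero.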

For BLE, if you want a smartphone to be able to understand heart-rate data (a value that is part of a standard BLE profile), you have no other option than following the BLE standard and using the security that is built into the protocol.

\n" }, { "Id": "4612", "CreationDate": "2019-10-16T03:48:30.587", "Body": "

So I happen to know some Python to turn lights on and off with an RPi3. I also know how to use relay boards to automate my home. But the problem is that I have one RPi3 and the light which I want to control is away from it, so radio seemed to be the next option. I am a newbie in this. How do I replace a traditional on/off switch with an RF receiver which performs certain actions when the RPi3 sends a particular signal? I don't know how to set this up on either the RPi3 side or the switch side. Please help me out here, and please recommend the switch I should use.

\n", "Title": "How to control an on/off switch with raspberry Pi 3(perhaps using Radio)", "Tags": "|raspberry-pi|smart-lights|", "Answer": "

There has been a lot going on with using MQTT in home automation. Check out the zigbee2mqtt project, which allows you to control commercial home automation devices like the IKEA TRÅDFRI wireless control outlet with your own server.

\n" }, { "Id": "4617", "CreationDate": "2019-10-17T14:45:32.047", "Body": "

My raspi knows when EDT (Eastern US Daylight Time) is active,

\n\n
HypriotOS/armv7: pirate@black-pearl in ~\n$ date\nThu Oct 17 10:27:27 EDT 2019\n
\n\n

the docker container for homeassistant knows when EDT is active,

\n\n
root@black-pearl:/# date\nThu Oct 17 10:28:54 EDT 2019\n
\n\n

but I cannot manage to get homeassistant v.0.100.2 to reflect this.

\n\n

I've used

\n\n
time_zone: EST\n
\n\n

which does display the correct time when we are NOT on daylight saving time.

\n\n

Here's what else I've tried:

\n\n
    \n
  1. passing in the hosts time, which clearly works as evidenced by the docker containers console

    \n\n
    -v /etc/localtime:/etc/localtime\n\n-v /etc/timezone:/etc/timezone:ro\n
  2. \n
  3. Leaving blank in the hopes that freegeoip will solve this for me.

  4. \n
  5. Using time_zone: EDT instead of EST but it isn't recognized as a valid config.

  6. \n
\n\n

Ultimately I'm not above lying to hass and telling it we're in America/Moncton (UTC - 4) and manually switching it back to EST (UTC - 5) the night before our transitions, but that's not really in the spirit of automation, so I'd rather not.

\n\n

Any help?

\n", "Title": "Homeassistant Timezone Sync", "Tags": "|home-assistant|", "Answer": "
America/New_York\n
\n\n

SOLVED

\n\n

This worked. I'm a dingus: if I knew how to read and FOLLOW DIRECTIONS, I would have seen that you should use the tz_database_name instead of its common abbreviation.

\n\n
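For anyone landing here, the working line in configuration.yaml looks like this (the surrounding key is the standard Home Assistant layout):

```yaml
homeassistant:
  # Use the tz database name, not an abbreviation like EST or EDT
  time_zone: America/New_York
```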

Credit to Reddit's u/kb5zuy.

\n" }, { "Id": "4621", "CreationDate": "2019-10-20T18:07:01.010", "Body": "

Is it possible to create an IFTTT widget which can turn on a LIFX light and fade it off over a specific time?

\n\n

I had thought configuring a button to turn the light on then set a fade duration using the advanced settings would work. Nope.

\n\n

Then I tried creating two web hooks which fire on receipt of an event. One turns the light on, the other fades off. Doesn't work either.

\n\n

There does not appear to be a schedule service or timed trigger, so I'm at a loss.

\n", "Title": "Switch on, Fade off LIFX", "Tags": "|ifttt|", "Answer": "

The final, whole, answer is this:

\n\n

I created a Location trigger in IFTTT to fire a webhook action. The webhook action calls a script located at https://script.google.com/home. This script looks like this:

\n\n
function doGet(e) {\n  ScriptApp.newTrigger(\"notify\")\n  .timeBased()\n  .after(5 * 60 * 1000)\n  .create();\n}\n\nfunction notify() {\n  UrlFetchApp.fetch(\"https://maker.ifttt.com/trigger/<my event name>/with/key/<my ifttt webhook key>\");\n}\n
\n\n

The script sets up a 5 minute timer, which calls the second function in the script, which calls the IFTTT maker URL with an event name.

\n\n

I then setup a webhook trigger which listens for the event and performs the required action, which turns off the light.

\n\n

The light is turned on, initially, by a simple IFTTT Location trigger.

\n\n

Some useful info: The script needs to be published as a web app to be called. The whole process of setting all this up required multiple steps of allowing google/ifttt to call each other, etc. Any time a change is made to the script, it needs to be re-published as a new version or it will not take effect.

\n" }, { "Id": "4633", "CreationDate": "2019-10-25T12:50:40.140", "Body": "

My company uses sensors on doors to track activity and control access for many sites throughout the UK. We need to log this activity by sending UDP packets from the sensor's receiver to a public server via a 4G router, where the data will be received and access control logs will be saved.

\n\n

We had a solution where the receiver was programmed to send the data to the public server's static IP address and the router was set as the default gateway; therefore, when the server was not found on the network, the traffic would be sent to the router and then on to the server. However, this means that if we changed the server's IP address we would have to reprogram every receiver we use in the UK on site, which is not good practice.

\n\n

Because of this, we need a solution based on a routing rule of sorts that sends all data from the receiver to the public IP address of the server, so that we can change the routing config remotely.

\n\n

Any help would be appreciated

\n", "Title": "how to network a receiver to a public server", "Tags": "|networking|sensors|", "Answer": "

I think you are approaching the problem from the wrong direction.

\n\n

DNS is the known, proven solution for decoupling a service from the IP address/physical machine it is provided on. Trying to re-invent a solution to this problem at the network routing level is not the right way forward.

\n\n

Secondly, you are using UDP packets over the general Internet to try to deliver data that should not be lost (if it can be lost, what's the point of trying to collect it in the first place?). Also, given the context, it is possible that this data may be sensitive in nature. Unless the content is encrypted and also contains things like timestamps, there is nothing to stop an adversary replaying or faking messages, and spoofing UDP source addresses is trivial (see the majority of DDoS attacks).

\n\n

Firewalls can be set up to only allow traffic that is initiated from within the network and to only allow access to trusted sources for things like DNS.

\n\n

I suggest you look at the following:

\n\n
    \n
  1. Running a VPN on your router/gateway. This means all traffic is encrypted and will terminate inside your network.
  2. \n
  3. Look at getting a custom/private APN (e.g. from O2) from your Cellular network provider. Again this means that all the traffic will terminate within your network. While not encrypting the traffic it is not exposed to the Internet and you don't have to run the receiving server exposed to the world.
  4. \n
\n" }, { "Id": "4637", "CreationDate": "2019-10-26T20:25:18.413", "Body": "

Does anyone have a simple \"hello world\" example of writing text to the display of a ESP-WROVER-KIT-VB ?

\n\n

My search skills are failing me today.

\n", "Title": "ESP-WROVER-KIT-VB - seeking code example to write to display", "Tags": "|esp32|", "Answer": "

Read this. It worked for me; bigly.

\n" }, { "Id": "4639", "CreationDate": "2019-10-28T13:37:00.777", "Body": "

I just read

\n\n
\n

Bluetooth Direction Finding, added to the Bluetooth spec as part of Bluetooth 5.1 to allow devices to measure angle of arrival and angle of departure (AoA and AoD) to position devices to an accuracy of under one metre (around 3.3 feet.)

\n
\n\n

(read it here).

\n\n

I am excited about the idea, but want to be sure that I have understood it.

\n\n

If I want to track people/objects and report their locations to a server, I imagine that I need a BLE Gateway/router.

\n\n

I assume that the devices would send advertising packets, the gateway would receive these, detect the device location and send it via HTTP(S) to the server.

\n\n

I also assume that only the BLE gateway/router needs to be BLE 5.1, since it is the one performing direction finding.

\n\n

Are my assumptions correct? Does it work that way? Can a BLE 5.1 gateway perform direction finding on a BLE 4.0 advertisement?

\n\n

I welcome any more info, such as tutorials, videos, books, etc

\n", "Title": "Bluetooth Direction Finding - do I understand it correctly?", "Tags": "|bluetooth-low-energy|direction-finding|", "Answer": "

The way Bluetooth direction finding works is described in this document.

\n\n

There are two scenarios:

- Angle of Arrival (AoA): the transmitting device sends a special direction finding signal from a single antenna; the receiving device uses a phased antenna array and IQ sampling of the incoming signal to determine the direction.
- Angle of Departure (AoD): the transmitting device sends the direction finding signal through a phased antenna array; the receiving device performs IQ sampling with a single antenna.

\n\n

As you see, in both scenarios, there are requirements on both ends. Some are definitely hardware upgrades (the phased array antenna). The rest (IQ sampling and sending the direction finding signal) may be possible with just firmware / software upgrades, depending on the exact implementation, though I kind of doubt it (it would also most likely require at least a change in the BLE chip firmware, not just a high-level OS/app change).

\n\n

So, as far as I understand it at this point, you need BT 5.1 equipment at both ends to make it work.

\n\n

I'd love to hear otherwise!

\n" }, { "Id": "4656", "CreationDate": "2019-11-07T17:06:01.020", "Body": "

I would like to build a network consisting of 5 smartphones (preferably Android, if it makes any difference) that log synchronized accelerometer data into a computer (like a data-logger). How can I achieve this? Any help would be appreciated.

\n\n

Some more details:

- The preferred sampling frequency is 50 Hz. Depending on the quality of the accelerometers, this high frequency might not make sense; however, a sampling frequency below 10 Hz would not be useful.
- Data from all accelerometers needs to be synchronized. The logged data should preferably be in the following format: Time Acceleration11 Acceleration12 ... Acceleration53, where Acceleration11 is the acceleration in device 1, direction 1 (say, the X direction).

\n\n

I am a newbie in electronics and IoT, so please consider this fact in your answer. For instance, please avoid using abbreviations without explaining them and please provide some guidance how to practically achieve what you are suggesting.

\n", "Title": "How can I build a network of 5 smartphones that log accelerometer data into a computer?", "Tags": "|sensors|data-transfer|interoperability|", "Answer": "

Synchronizing data from different devices is always a hard task. In your favor, you are using smartphones, so you can manage to get quite precise timestamps in this scenario.

\n\n

First of all, we need to think about the communication. The easiest way I can think of, requiring the least programming and hardware, works like this:

\n\n
    \n
  1. The phones are running an app which samples the accelerometer data, timestamps the readings with the best precision you can get, and then sends them via HTTP request to a webserver.
  2. \n
  3. The webserver consists of an RPi running LAMP. A PHP script receives the request and places the data in a table using MySQL.
  4. \n
  5. Your front end consists of a simple webpage on which JavaScript gets the data from the database and does the corresponding processing on it.
  6. \n
\n\n

Now let's talk about #1.

\n\n

In my opinion, you have two ways of timestamping your data. The easy way and the hard way:

\n\n

The simplest way is to use an external source, such as NIST. Another very simple way is to use GPS timestamps, which are pretty accurate. The hard way is to go for clock synchronization between the devices, which is hard to achieve.

\n\n

But,

\n\n

there is a tool to help you with that.

\n\n

The webserver is pretty straightforward: get an RPi, set up LAMP, and write a simple PHP server using mysqli. The internet is full of tutorials.

\n\n

Once you have your data in the database and properly timestamped according to your precision, parse it and do whatever you need with that. If the timestamp matches, there is your synchronized data.

\n\n
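To sketch that final alignment step (everything here is hypothetical: the row layout, the device ids and the sample values), you can round each timestamp to the 20 ms sampling period of a 50 Hz signal and keep only the time buckets where every device contributed a sample:

```python
from collections import defaultdict

PERIOD_MS = 20  # 50 Hz sampling period

# Hypothetical rows pulled from the database:
# device_id -> [(timestamp_ms, ax, ay, az), ...]
samples = {
    1: [(1000, 0.1, 0.0, 9.8), (1021, 0.2, 0.0, 9.8)],
    2: [(1002, 0.0, 0.1, 9.7), (1019, 0.1, 0.1, 9.7)],
}

def align(samples, period_ms=PERIOD_MS):
    """Group samples from all devices into common time buckets."""
    buckets = defaultdict(dict)
    for device, rows in samples.items():
        for t, ax, ay, az in rows:
            bucket = round(t / period_ms) * period_ms
            buckets[bucket][device] = (ax, ay, az)
    # Keep only buckets where every device contributed a sample.
    return {
        t: row for t, row in sorted(buckets.items())
        if len(row) == len(samples)
    }

for t, row in align(samples).items():
    # One output line per bucket: Time Acc11 Acc12 Acc13 Acc21 ...
    flat = [v for dev in sorted(row) for v in row[dev]]
    print(t, *flat)
```

In practice you would pull the rows from MySQL instead of a literal dict, and you may prefer to interpolate rather than drop incomplete buckets.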

Please note that this is a very simple approach with some significant drawbacks, depending on how precise the synchronization should be, but it may give you a head start on the task.

\n" }, { "Id": "4675", "CreationDate": "2019-11-14T02:12:12.177", "Body": "

With Alexa I can change default shopping list and todo app by settings.

\n\n

I'm using AnyList with Alexa, so can I also use AnyList with Google home?

\n", "Title": "How to change default shopping list app with google home", "Tags": "|google-home|", "Answer": "

AnyList has been updated to work with Google Assistant now, I use it myself. You just need to go into the settings for Google Home and change the default list to AnyList.

\n\n

https://help.anylist.com/articles/enable-google-assistant/

\n\n
\n

Set a default list service

\n \n

Let your Google Assistant create and edit lists in a service, like Google Keep.

\n \n

On your Android phone or tablet, open the Google Assistant app (or the Assistant app on iOS). At the top right, tap your profile image or initial, then Services, then Notes & Lists. Tap a service to set it as your default. To confirm, tap Continue. Any new lists created with your Google Assistant will be visible on your default list service. Any lists you created with your Google Assistant before you set a default list service won't be available on that service.

\n \n

Important: If you choose a third-party provider app as your default, that provider's privacy policy applies.

\n
\n" }, { "Id": "4696", "CreationDate": "2019-11-21T03:21:17.263", "Body": "

I have a spreadsheet of about 150 Mosquitto log entries. It is paired with 30 messages I published, matched in time. That is, there are about 5 times as many Mosquitto entries as published items. Many seem to indicate that things I published timed out. Others say "Socket error". But all the subscribers got the messages.

\n

My problem is, after publishing 2 or 3 per hour for 2 or 3 days, suddenly none of the subscribers get anything I publish. But the log "looks the same" as far as I can tell. But there are thousands of entries.

\n

EDIT:\nThe link will bring up my spreadsheet. I edited it to just 44 rows. There are two vertical sections. Messages I published on the left and Mosquitto log entries on the right. Sorry, when I pasted them into the spreadsheet the delimiter characters spread them across many columns. The first 3 or so pubs seem fine but then the 12:13:26 pub has a socket error although the subscriber received the pub. After that things seem to get worse with timeouts also.

\n

Mosquitto Capture Short

\n

The rest of the 100 or so rows (that I didn't include) are very similar. Timeouts and socket errors. But the subscribers get the pubs.

\n

The pubs come from a C program I wrote that runs on my Raspberry Pi. The subscribers are items in OpenHAB that is also on the same RPi.\nHere is the C code that does the publishing:

\n
void publish(char *Topic, char *action)\n{\n    mosquitto_lib_init();\n    mosq = mosquitto_new(NULL,true,NULL);\n    mosquitto_loop_start(mosq);\n    mosquitto_connect_async(mosq,MQTT_Host,MQTT_Port,1);\n    mosquitto_publish(mosq,NULL,Topic,strlen(action),action,2,false);\n    printf("Mosquitto Sending: %s %s to %s:%d\\n",Topic,action,MQTT_Host,MQTT_Port);\n}\n
\n", "Title": "Is There A Description For the Mosquitto Log Entries?", "Tags": "|mqtt|mosquitto|", "Answer": "

A spreadsheet really isn't the best way to work with these logs; it makes them really hard to read properly. Next time please just post the text and use the {} option in the tool bar to format it as code.

\n\n

There is nothing too obviously wrong in the log entries you've provided.

\n\n

That said, the C code worries me a lot. It looks like you are creating a new MQTT client for every publish event and then not shutting it down or cleaning up the resources properly afterwards. You are very likely leaking client structures and network threads. These are the sort of things that will cause a crash at some point.

\n\n

Firstly assuming the application that is publishing the messages is long running it should create a single MQTT client object and reuse it over the life time of the application.

\n\n

This means the initialisation of the library, the creation of the client, the starting of the network loop and the connection should all be moved outside the publish function, and the mosq variable should be in a more global scope so it can be accessed later.

\n\n

Also using 1 second for a KeepAlive value is just going to generate a huge amount of extra load on your broker (especially for a client that you are currently leaking and not cleaning up). A better value would be 60.

\n\n

You can see the KeepAlive value in the logs in the connect line:

\n\n
12:23:31pm New client connected from 192.168.1.115 as mosq-0g6AG1QuASdsrJZqAz (p2,  c1, k1).\n
Here, p2 is the protocol level, c1 the clean-session flag, and k1 the KeepAlive of 1 second.

\n\n

p.s. No there is no description of the log output except the src code

\n" }, { "Id": "4708", "CreationDate": "2019-11-29T01:38:43.990", "Body": "

I bought an orange pi zero plus and want to make a wifi access point that passes all traffic of the connected devices through tor.

\n\n

Simply I wanted a dev board that has gigabit ethernet and wifi for access point, and is as cheap as possible.

\n\n

I want to create a secure pathway through Tor. So I installed Tails (not specifically for this, but for the sake of its features) on my SD card and tried to turn my Pi on, but nothing happens; not even the LEDs glow. I am sure that the power adapter is working fine, and the same goes for the ethernet cable.

\n\n

Does Tails have anything to do with this? I assumed that most lightweight distros can run pretty well on new dev boards. Should I install Armbian instead of Tails? If yes, then why won't Tails work?

\n", "Title": "OS for Orange Pi zero plus", "Tags": "|security|wifi|", "Answer": "

Some Orange Pi components are in the mainline kernel and some are not. To have a fully running Linux OS, you should use Armbian. If you want to use a distro other than Debian/Ubuntu, you need to use Yocto in order to build your own image. There is a recipe for the Allwinner SoC used in the Orange Pi Zero.

\n" }, { "Id": "4729", "CreationDate": "2019-12-09T01:53:17.933", "Body": "

I'm using Google calendar with Echo show by voice.

\n\n

But by default Echo show also displays upcoming events, and my friends coming to home can see it. It's a bit annoying.

\n\n

I can disable all notifications by config, but if possible I want to disable only notifications for the calendar events.

\n\n

Is there way for it? I couldn't find it how to do it.

\n", "Title": "How to use calendar without showing notification on display", "Tags": "|alexa|amazon-echo|", "Answer": "

I noticed the setting is not available in the Alexa app, but you can configure it directly on the device.

\n\n

There is a display setting for calendar in:

\n\n

Settings->home, clock->home contents

\n\n

The names of the settings are probably a bit different (I'm using it in Japanese).

\n" }, { "Id": "4731", "CreationDate": "2019-12-10T00:51:48.263", "Body": "

I am sending sensor data between two Raspberry Pis via lora. I am using two Lora Radios and NOT using a LoraWan network like The Things Network. How should I encrypt my data? Are there any open source python libraries?

\n\n

Thank you!!

\n", "Title": "Encrypting sensor data on Raspberry Pi?", "Tags": "|raspberry-pi|security|sensors|lora|", "Answer": "

Well, if it's your devices and network, it's going to be way easier than usual: you can just use symmetric key encryption like AES and hard-code the key in both devices.

\n

To do this, I would recommend cryptography.io and you could use the Fernet method

\n

from cryptography.fernet import Fernet

\n

From there on, if you go to their website, it should be quite straight forward how to implement it. You generate a key, save it on both devices, use it before you send a message and after you receive one to encrypt and decrypt.

\n
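A minimal sketch of that flow, assuming the cryptography package is installed (how you distribute the key to the two Pis is up to you):

```python
from cryptography.fernet import Fernet

# Generate once, then store the same key on both Raspberry Pis.
key = Fernet.generate_key()
f = Fernet(key)

# Sender side: encrypt the sensor reading before handing it to the LoRa radio.
token = f.encrypt(b"temp=21.5")

# Receiver side: decrypt what came off the radio.
plaintext = f.decrypt(token)
```

One caveat for LoRa: a Fernet token carries a version byte, timestamp, IV and HMAC, so it is noticeably larger than the plaintext. If airtime is tight, raw AES (e.g. CTR or GCM mode) from the same library keeps the overhead smaller.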

Hope this helps, good luck!

\n" }, { "Id": "4764", "CreationDate": "2020-01-01T18:19:02.060", "Body": "

My Mosquitto server's whole ACL file content:

\n\n
pattern write s1/%c\npattern read s1/%c\n
\n\n

I know it should be possible to have a single line:

\n\n
pattern readwrite s1/%c\n
\n\n

But my server complains with error:

\n\n
Empty invalid topic access type in acl_file.\n
\n\n

I suppose the two lines should do the same as the single readwrite one. Please correct me if I'm wrong.

\n\n

I do subscribe to s1/ss from client ss:

\n\n
mosquitto_sub -h 192.168.1.8 -t s1/ss --cafile ca.crt -p 8883 -d -u b -P b -i ss\n
\n\n

And I do publish from client ss:

\n\n
mosquitto_pub -h 192.168.1.8 -t s1/ss --cafile ca.crt -m \"test\" -p 8883 -d -u b -P b -i ss\n
\n\n

According to the Mosquitto log, the server says it is happy with the publish, but the subscriber does not receive the message. Even worse, according to the log, at the same time it resubscribes to the server.

\n\n
1577902083: New connection from 192.168.1.222 on port 8883.\n1577902083: Client ss already connected, closing old connection.\n1577902083: New client connected from 192.168.1.222 as ss (c1, k60, ub).\n1577902083: Sending CONNACK to ss (0)\n1577902083: Received PUBLISH from ss (d0, q0, r0, m0, 's1/ss', ... (4 bytes))\n1577902083: Received DISCONNECT from ss\n1577902084: New connection from 192.168.1.222 on port 8883.\n1577902084: New client connected from 192.168.1.222 as ss (c1, k60, ub).\n1577902084: Sending CONNACK to ss (0)\n1577902084: Received SUBSCRIBE from ss\n1577902084:     s1/ss (QoS 0)\n1577902084: ss 0 s1/ss\n1577902084: Sending SUBACK to ss\n
\n\n

Why does my subscriber not receive the message? Are my ACL lines correct?

\n", "Title": "Subscriber not receives message from client", "Tags": "|mqtt|mosquitto|", "Answer": "

You can't have 2 clients connected at the same time with the same client id.

\n\n

You have hardcoded the same client id -i ss for both the publisher and the subscriber.

\n\n

Since you are starting the subscriber first, it will be kicked off the broker as soon as the publisher connects, hence it will not be connected when the publisher actually publishes the message, so will not see it.

\n\n

This has nothing to do with the ACL

\n" }, { "Id": "4769", "CreationDate": "2020-01-02T19:36:16.653", "Body": "

I want to build a motion detector that performs a similar service to commercial security\nservices, which notify me upon intrusion and provide a video feed. The other main\nrequirement is that I can update the software remotely. (For example, the Particle Photon,\nwhich is similar to an Arduino, can be flashed over-the-air.) I found this\nthread\nand this\nthread\nand still have questions on how to proceed.

\n\n

I would like a board that can:

\n\n\n\n

I know that people have done this on a Raspberry\nPi,\nbut my main requirement is a board whose firmware I can flash remotely\n(as far as I know, the Pi and any \"normal\" computer cannot be).

\n\n

Has anyone built a similar system and could give some pointers?

\n", "Title": "Motion detector system that can be flashed remotely", "Tags": "|smart-home|sensors|arduino|home-security|", "Answer": "

You don't \"flash\" software to a pi in the same way you do a micro controller based device. You can easily update software OTA (over the air) to a Raspberry pi, just by copying the files to the device and restarting the application.

\n\n

If you want to automate this sort of thing then using a service like balena.io will do it for you. You just need to package your application up into docker containers.

\n" }, { "Id": "4770", "CreationDate": "2020-01-02T20:33:07.797", "Body": "

For one project I have used the Particle Photon, an IoT device similar to Arduino, and am considering changing to a different device for another project (a motion detector that has its own question). I can't find the difference between Particle products such as Photon, Argon, Xenon, and Boron; only that the Boron serves for mesh networks.

\n\n

Are they just newer versions, like Raspberry Pi 2, 3, and 4? Or do they have different purposes?

\n", "Title": "What is the difference between Particle Photon, Argon, Xenon, Boron, etc?", "Tags": "|arduino|", "Answer": "

The difference is the types of radio hardware each has.

- Photon: Wi-Fi only.
- Argon: Wi-Fi plus Bluetooth LE (and Particle mesh networking).
- Xenon: Bluetooth LE / mesh only; intended as a mesh endpoint.
- Boron: cellular (LTE-M or 2G/3G) plus Bluetooth LE / mesh.

\n\n

You can find the datasheets for all the devices here https://docs.particle.io/datasheets/wi-fi/photon-datasheet/

\n" }, { "Id": "4773", "CreationDate": "2020-01-03T05:53:34.790", "Body": "

I'm looking for ideas and suggestions to build a 220v ~ 40 amp wifi switch for controlling my pool pump so that it can replace my mechanical timer.

\n\n

Very few devices are available on the market. Those that exist are relatively expensive and, according to the reviews, don't live up to expectations in terms of reliability and build quality:

\n\n\n\n

The main issue seems to come from the terminal connections not being big or sturdy enough for 10 gauge wire.

\n\n

Looking at the Migro Outdoor Smart Wi-Fi Outlet Box, it seems they have combined a contactor with a Sonoff switch, which seems like the best option. This video explains how a contactor works. Migro are charging around $140, whereas the contactor on its own only costs $14 here and the Sonoff switch $8 here.

\n\n

Does building one seem the best / cheapest option?

\n", "Title": "How to build a 220v ~40 amp WIFI switch (for pool pump or similar)", "Tags": "|smart-home|wifi|", "Answer": "

Looks like you've answered your own question :)

\n\n

Yes - a wifi controllable interface (like a Sonoff) hooked up to act as the trigger for the contactor relay is probably your best/cheapest option. It saves you the issue of coding your own I/O control and easily manages the overall power requirements.

\n" }, { "Id": "4789", "CreationDate": "2020-01-07T08:25:15.310", "Body": "

I have used the ESP32 in my projects and it's easy to use, but I want to try the CC3235SF. How does it work exactly? It has on-board RAM and 1 MB of flash, so does that mean I can program it directly, like I can with the ESP32, or do I have to use it with some other host MCU? Also, there are many development boards on the market for the ESP32 but none for the CC3235SF. I also want to know what's different between development boards and whatever people use in their commercial products.

\n", "Title": "What is the difference between Texas Instruments CC3235sf and ESP32?", "Tags": "|esp32|", "Answer": "

To answer the title of your question, the differences are quite numerous. At the very least, for a very very quick look at the specs:

- Different CPU cores: the CC3235SF uses an Arm Cortex-M4 application core (plus a separate network processor), while the ESP32 uses one or two Xtensa LX6 cores at a higher clock rate.
- The CC3235SF supports dual-band (2.4 GHz and 5 GHz) Wi-Fi, while the ESP32 is 2.4 GHz only.
- The ESP32 also includes Bluetooth/BLE, which the CC3235SF does not.

\n\n

But there are probably more differences than similarities, other than the fact they're both MCUs with Wi-Fi.

\n\n

Since the CC3235SF is an MCU, yes, you can program it directly, though the tools to do that may be quite different.

\n\n

There is a development board for the CC3235SF: http://www.ti.com/tool/LAUNCHXL-CC3235SF

\n\n

A development board is targeted for development, so it may have more stuff than you actually need in a final product, as well as be missing parts you need and you add externally during development. Also, a development board is not necessarily certified (FCC, CE, etc.).

\n\n

When building a commercial product, you'll usually have specifically the components you need on your PCB, removing any unnecessary stuff and incorporating any external components, and you'll have the whole finished product undergo testing and certification. Of course, on very small quantities, you may sometimes use \"development boards\" directly.

\n" }, { "Id": "4799", "CreationDate": "2020-01-14T10:52:07.477", "Body": "

In London, and soon in Paris also, contactless bank cards can be used to pay for public transport.

\n\n

The system tracks a user's journey, through the \"tapping in\" and \"tapping out\" of the user; and then charges them the appropriate amount for the journey they have taken.

\n\n

My question is, how is this information tracked? I assume the system can't be writing information to my bank card, so is it sending the data to a central system somewhere? Or does it work another way?

\n", "Title": "How are journeys tracked via contactless card?", "Tags": "|protocols|", "Answer": "

There are a few references to this on the Wikipedia page for the Oyster card:

\n\n
\n

Since the Oyster readers cannot write to a contactless card, the reader when touching out is unable to display the fare charged for the journey, as the card does not have the starting point stored in it. This is calculated overnight once the touch in and touch out information is downloaded from the gates and collated.

\n
\n\n

(emphasis mine)

\n\n

There are a few more details in that section. Sadly the source in that section does not point anywhere.

\n" }, { "Id": "4804", "CreationDate": "2020-01-16T09:00:31.620", "Body": "

I'm making an android application to communicate with a bicycle security system using android studio. The application is using the MQTT protocol to talk to a remote server which will act as the broker and relay messages to the security system which has a microcontroller and a GSM module that allows for an internet connection.

\n\n

The problem I'm experiencing relates to the Eclipse Paho Mqtt Android Client. This is a neat library that contains functions to implement the MQTT protocol. So far, I've managed to establish a connection between the server and the application. At least, that's what it looks like. See the following image, which shows my logcat messages.\"The

\n\n

It definitely appears to be connected; it's pinging once a minute. The only problem is that when I try to verify that the connection is there using the isConnected function, it always tells me that I'm not connected. Below you can see my code. If you need to download the library or want to look at it, here is a link:\nhttps://github.com/eclipse/paho.mqtt.android

\n\n

Also, I would like to explain my code a little so you know what you're looking at. Please read this carefully.

\n\n

It is in Kotlin. I made a Client_MQTT class that is instantiated in the main activity. This class holds an instance of MqttAndroidClient (from the Paho library). This instance is created in the constructor and is named MqqtClient. There is a connect function which does all the busy work and connects to the server. There's also a function called ConnectionAlive; the only thing it does is return MqqtClient.isConnected().

\n\n

The other portion of my code is the MainActivity where the Client_MQTT class is instantiated and the connect function is called. Most importantly, this is where I do a simple if check to see if the connection is alive. It always says no, but I know that can't be right.

\n\n

Here is the code.

\n\n
package com.chymera_security.application\n\nimport android.content.Context\nimport androidx.appcompat.app.AppCompatActivity\nimport android.os.Bundle\nimport android.util.Log\nimport org.eclipse.paho.android.service.MqttAndroidClient\nimport org.eclipse.paho.client.mqttv3.IMqttActionListener\nimport org.eclipse.paho.client.mqttv3.IMqttToken\nimport org.eclipse.paho.client.mqttv3.MqttException\nimport org.eclipse.paho.client.mqttv3.MqttClient\nimport androidx.core.app.ComponentActivity.ExtraData\nimport androidx.core.content.ContextCompat.getSystemService\nimport android.icu.lang.UCharacter.GraphemeClusterBreak.T\nimport androidx.fragment.app.FragmentActivity\nimport kotlinx.android.synthetic.main.activity_main.*\n\nclass MainActivity : AppCompatActivity() {\n\n  override fun onCreate(savedInstanceState: Bundle?) {\n    super.onCreate(savedInstanceState)\n    setContentView(R.layout.activity_main)\n    var Client = Client_MQTT(this.getApplicationContext()) //passing in the context\n    //var client = MqttAndroidClient(this.getApplicationContext(), \"tcp://io.adafruit.com:1883\",\n    //   ConnectThis.ClientId)\n    Client.connect()\n    if (Client.ConnectionAlive()) {\n     ViewAlarmStatus.text = \"Connected to server.\" //You can replace this with a simple log statement\n    } else {\n      ViewAlarmStatus.text = \"Not connected to server.\"\n    }\n  }\n}\n
\n\n

-

\n\n
package com.chymera_security.application\n\nimport android.content.Context\nimport android.util.Log\nimport org.eclipse.paho.android.service.MqttAndroidClient\nimport org.eclipse.paho.client.mqttv3.*\n\nclass Client_MQTT constructor(context_param: Context){\n\n  internal var options = MqttConnectOptions()\n    internal lateinit var ClientId : String\n    internal lateinit var MqqtClient: MqttAndroidClient\n    internal lateinit var context : Context\n\n    init {\n      options.userName = \"Put your username here\"\n      options.password = \"password here\".toCharArray()\n      ClientId = MqttClient.generateClientId()\n      context = context_param\n      MqqtClient = MqttAndroidClient(context, \"tcp://io.adafruit.com:1883\", ClientId)\n  }\n\n  fun connect() {\n    try {\n      val token = MqqtClient.connect(options)\n      token.actionCallback = object : IMqttActionListener {\n        override fun onSuccess(asyncActionToken: IMqttToken) {\n          Log.i(\"Connection\", \"Connected to server \")\n          //connectionStatus = true\n          // Give your callback on connection established here\n        }\n        override fun onFailure(asyncActionToken: IMqttToken, exception: Throwable) {\n          //connectionStatus = false\n          Log.i(\"Connection\", \"failure\")\n          // Give your callback on connection failure here\n          exception.printStackTrace()\n        }\n      }\n    } catch (e: MqttException) {\n      // Give your callback on connection failure here\n      e.printStackTrace()\n    }\n\n  }\n  fun ConnectionAlive (): Boolean{\n    return MqqtClient.isConnected()\n  }\n}\n
\n\n

If you have any knowledge about this, please share. Am I actually connected to the server? It looks like it, but isConnected doesn't agree. Is there some conceptual misunderstanding about how I'm using isConnected? Am I just using it wrong, or is this some crazy bug? Would love to hear from you.

\n\n

Extra resources:\nhttps://medium.com/@chaitanya.bhojwani1012/eclipse-paho-mqtt-android-client-using-kotlin-56129ff5fbe7

\n\n

https://www.hivemq.com/blog/mqtt-client-library-enyclopedia-paho-android-service/

\n", "Title": "Question about Eclipse Paho Mqtt Android Client library function \"isConnected\"?", "Tags": "|mqtt|android|paho|eclipse-iot|", "Answer": "

So it turns out that the problem was not the isConnected() function. The problem was that I was checking whether it was connected too soon. The connect function is asynchronous, meaning that it executes in the background. So while it was trying to connect, I was asking it if it was connected before it was finished. To fix this, I added an onClickListener to one of my buttons and checked the status just by pressing the button a few seconds after the app started.
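The same pitfall is easy to reproduce outside Android. A minimal Python sketch (the `SlowClient` class is a made-up stand-in, not the Paho API) shows why checking immediately after an asynchronous connect() reports "not connected":

```python
import threading
import time

class SlowClient:
    """Toy client whose connect() finishes in the background."""
    def __init__(self):
        self._connected = threading.Event()

    def connect(self):
        # Like MqttAndroidClient.connect(): return immediately and
        # finish the handshake on a background thread.
        threading.Thread(target=self._handshake, daemon=True).start()

    def _handshake(self):
        time.sleep(0.2)          # stand-in for network latency
        self._connected.set()    # the onSuccess callback would fire here

    def is_connected(self):
        return self._connected.is_set()

client = SlowClient()
client.connect()
print(client.is_connected())     # False: checked too soon
client._connected.wait()         # wait for the "onSuccess" moment
print(client.is_connected())     # True
```

The robust fix is the same in any language: only treat the client as connected from inside the success callback, rather than polling right after calling connect().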

\n" }, { "Id": "4809", "CreationDate": "2020-01-18T14:58:22.493", "Body": "

I have a Raspberry Pi set up to transmit data over a socket for my project. It currently does this over cellular data on a pay-as-you-go plan, so I would like to optimise it. All that needs to be transmitted is three numbers, currently in the format int,int,int. So, my question is which characters use the least data. I know this is nit-picking, but it's more for theory.

\n\n

If possible, a list would be better, as I would also need a character for negative numbers.

\n\n

It will be from -100 to 100, but this could be any 200-value range if reformatted at the other end.

\n", "Title": "How can I transmit this information from my device using the least data possible?", "Tags": "|data-transfer|mobile-data|", "Answer": "

Slightly too large for a comment, and does not answer your question as posted, but ...

\n\n

Firstly, an upvote to @jcaron and, yes, it is a good idea to learn about data formats (integer/string/etc.) and data compression (think .ZIP files).

\n\n
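As a concrete illustration of the data-format point: three integers in the range -100 to 100 each fit in one signed byte, so the whole payload can be 3 bytes instead of an up-to-12-byte \"int,int,int\" string. A Python sketch:

```python
import struct

readings = (-100, 0, 100)

# Text encoding, as currently sent: up to 12 bytes ("-100,-100,-100")
as_text = ",".join(map(str, readings)).encode("ascii")

# Binary encoding: one signed byte ('b') per value, always exactly 3 bytes
as_bytes = struct.pack("bbb", *readings)

print(len(as_text))   # 10 bytes for this example
print(len(as_bytes))  # 3 bytes, and it round-trips:
print(struct.unpack("bbb", as_bytes) == readings)  # True
```

That said, at these payload sizes the per-packet protocol overhead (IP/UDP headers, etc.) will dwarf the payload either way, which is why the SIM pricing below matters more than the encoding.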

However, an alternative would be to consider an IoT SIM card. A bunch of these have sprung up in recent years. They are data-only, no voice, and beware that some might be 2G-only, which your country may not support or may be dropping.

\n\n

The big advantage is that they charge per byte of data sent, and some don't even have a monthly fee.

\n\n

I first became aware of them when buying an Onion Omega 3G expansion (*), which recommended Hologram.

\n\n

I will leave it to you to make your own decisions. Google for IoT SIM perhaps?

\n\n

You can look at the Amazon listing for the Hologram SIM, which includes comparisons with others.

\n\n

Here is the pricing for Hologram.

\n\n

Here for ThingsMobile.

\n\n

Here for Global M2M SIM.

\n\n

Generally, you are talking about 5 cents for a MB, which is a lot of data when you are only sending 3 integers.

\n" }, { "Id": "4816", "CreationDate": "2020-01-22T20:20:36.590", "Body": "

Under Mongoose OS, I\u2019m trying to write some code to detect that a button was pushed. I wrote the code below, but it thinks the button is always pushed, except when I actually push it. When I push and hold it, the output stops. The button is wired between the GPIO pin and GND, with no pull-up resistor since there is an internal pull-up. I wonder if my code is wrong and would appreciate your comments, thank you.

\n

I have pasted the relevant code below:

\n\n
// GPIO 36\n#define BTN_MOB 36\n\n#ifdef BTN_MOB\nmgos_gpio_set_mode(BTN_MOB, MGOS_GPIO_MODE_INPUT);\n#endif\n\nstatic void button_cb(int pin, void *pParam)\n{\n  if(pin == BTN_MOB)\n    LOG(LL_INFO, ("***** BUTTON PRESSED\\r\\n"));\n}\n\nmgos_gpio_set_button_handler(BTN_MOB,\n                  MGOS_GPIO_PULL_UP,\n                  MGOS_GPIO_INT_EDGE_NEG, \n                  100 /* debounce ms */,\n                  button_cb, /* callback handler */\n                  NULL); /* arguments to callback handler */\n
\n", "Title": "Mongoose OS and button press detection", "Tags": "|gpio|", "Answer": "

As it turns out, GPIO pins 34, 35, 36 and 39 are actually GPI (input only) and have no internal pull-up or pull-down resistors. I switched to a different GPIO with an internal pull-up, and this solved the problem.

\n" }, { "Id": "4830", "CreationDate": "2020-01-29T16:58:13.973", "Body": "

Is it possible to stream any audio from an Android phone to a multi-room speaker setup?

\n\n

I would like to invest in some multi-room speakers, but I do not use any streaming services. My media files, both audio and video, are scattered around locally on my hard drive and phone. I do not have a central media library. I watch a lot of videos on my phone's YouTube app.

\n\n

IKEA's SYMFONISK speakers seem like a good set of entry-level devices to set up a multi-room audio system. However, I'm unsure about how to actually stream music to them. From my research, it seems I can only use streaming services or a media library as input.

\n\n

I also own a Bluetooth headset that I can connect to my phone, and it will play any audio, whether it is music, video or YouTube content. But to my current understanding, the Sonos app is required for IKEA's speakers, and I can only stream audio that I explicitly selected within the app. The speakers do not simply 'forward' all audio from my phone, correct?

\n\n

Long story short: Is there any solution for a cave man like me, who does not rely on streaming services? My phone would be most important for me, so don't worry about my Linux laptop too much.

\n\n

To my current understanding, I would be better off with a Bluetooth speaker instead of Wi-Fi speakers. But Bluetooth speakers cannot usually be used in a multi-room setup, and connectivity is always a big issue.

\n", "Title": "Stream any audio from Android phone to multi-room speakers?", "Tags": "|android|ikea-tradfri|audio|", "Answer": "

Today I am going to answer my own question: This is not intended to be an advertisement of any sort, but eventually I ended up buying a pair of Ultimate Ears BOOM 3 speakers. They are portable Bluetooth speakers, but have support for multi-room audio.

\n

There is a PartyUp mode to connect up to 150 speakers and broadcast the same music to all of them. You cannot, however, play different music in each room, unless you pair multiple sources. But for me, at least, these speakers cover all the use cases I had when I posted my initial question back in January.

\n

In this setup, using Bluetooth, I can play audio from any app on my Android phone and broadcast it to multiple speakers. There is no need for additional apps or streaming services.

\n" }, { "Id": "4834", "CreationDate": "2020-01-31T12:13:54.103", "Body": "

EDIT: The question has changed after a reply from our network provider about the destination IP version.

\n\n

I'm experiencing problems sending data to an IPv6 (or IPv4) address using a BC95-G modem with an IPv6 network provider.

\n\n

Our NB-IoT provider states that:

\n\n\n\n

The provider also sent us IPv6 SIM cards. The BC95 supports them, and it seems it can open a socket, but the send command fails.

\n\n

Below you can find a demo listing, in which I try to open both IPv4 and IPv6 sockets for demonstration purposes. In the end, both fail:

\n\n
AT+NRB\n\nREBOOTING\n\u0386[0C][00]\u0386[03]`\nBoot: Unsigned\nSecurity B.. Verified\nProtocol A.. Verified\nApps A...... Verified\n\nREBOOT_CAUSE_APPLICATION_AT\nNeul \nOK\nAT\n\nOK\nATI\n\nQuectel\nBC95-G\nRevision:BC95GJBR01A07\n\nOK\nAT+NCONFIG?\n\n+NCONFIG:AUTOCONNECT,TRUE\n+NCONFIG:CR_0354_0338_SCRAMBLING,TRUE\n+NCONFIG:CR_0859_SI_AVOID,TRUE\n+NCONFIG:COMBINE_ATTACH,FALSE\n+NCONFIG:CELL_RESELECTION,TRUE\n+NCONFIG:ENABLE_BIP,FALSE\n+NCONFIG:MULTITONE,TRUE\n+NCONFIG:NAS_SIM_POWER_SAVING_ENABLE,TRUE\n+NCONFIG:BARRING_RELEASE_DELAY,64\n+NCONFIG:RELEASE_VERSION,13\n+NCONFIG:RPM,FALSE\n+NCONFIG:SYNC_TIME_PERIOD,0\n+NCONFIG:IPV6_GET_PREFIX_TIME,15\n+NCONFIG:NB_CATEGORY,1\n+NCONFIG:RAI,FALSE\n+NCONFIG:HEAD_COMPRESS,FALSE\n+NCONFIG:RLF_UPDATE,FALSE\n+NCONFIG:CONNECTION_REESTABLISHMENT,FALSE\n+NCONFIG:PCO_IE_TYPE,EPCO\n\nOK\nAT+CGDCONT=1,\"IPV6\",\"iot\" // or AT+CGDCONT=1,\"IPV4V6\",\"iot\"\n\nOK\nAT+COPS=1,2,\"20201\"\n\nOK\nAT+CEREG=1\n\nOK\nAT+CSCON=1\n\nOK\nAT+CFUN=1\n\nOK\nAT+CEREG?\n\n+CEREG:1,1\n\nOK\nAT+CGATT?\n\n+CGATT:1\n\nOK\nAT+CGPADDR\n\n+CGPADDR:0,2A02:1388:400:B:2183:7DD4:B7F1:DE5A\n+CGPADDR:1\nOK\nAT+CSQ\n\n+CSQ:13,99\n\nOK\nAT+NUESTATS\n\nSignal power:-928\nTotal power:-862\nTX power:210\nTX time:549\nRX time:27140\nCell ID:290888\nECL:0\nSNR:114\nEARFCN:6390\nPCI:214\nRSRQ:-108\nOPERATOR MODE:2\nCURRENT BAND:20\n\nOK\nAT+NSOCR=DGRAM,17,1024,1,\"AF_INET6\"\n\n1\n\nOK\nAT+NSOCR=DGRAM,17,1025,1,\"AF_INET\"\n\n2\n\nOK\nAT+NSOST=1,xx.xx.xx.xx,pp,2,4C47 // xx: IP address, pp: dest. port\n\nERROR\nAT+NSOST=2,xx.xx.xx.xx,pp,2,4C47\n\nERROR\nAT+NSOCL=1\n\nOK\nAT+NSOCL=2\n\nOK\n
\n\n

Note that:

\n\n\n\n

EDIT: How can I send to a IPv6 IP address using BC95-G?

\n\n

With the same modem I can successfully send data using IPv4 SIMs (from Vodafone). Has anyone succeeded in sending data using the BC95-G and IPv6 SIM cards?

\n\n

Thanks.

\n", "Title": "NB-Iot send UDP data to IPv6 with BC95-G via IPv6-only provider", "Tags": "|nb-iot|", "Answer": "

As far as I know (so far), there is only one solution to this problem: upgrade the firmware of the BC95-x.

\n\n

After a discussion with Quectel, we came to the conclusion that in order for AT+NSOST to work with IPv6 arguments, newer firmware must be running on the module. The version we tried is R02A02.

\n\n

In order to upgrade the firmware, you will also need an upgrade tool. Quectel suggests QFlash. Unfortunately, things get a little tricky here.

\n\n\n\n

Here you can find both the firmware and the upgrade utility in a zip with md5:89158f2384cb3fd086a972cdfb4efabb.

\n\n

After that, all seems to work.

\n\n
AT+NRB\n\nREBOOTING\n\u0386[0C]A\u0386[04]A\nBoot: Unsigned\nSecurity B.. Verified\nProtocol A.. Verified\nApps A...... Verified\n\nREBOOT_CAUSE_APPLICATION_AT\nNeul \nOK\nAT\n\nOK\nATI\n\nQuectel\nBC95-G\nRevision:BC95GJBR02A02\n\nOK\nAT+NCONFIG?\n\n+NCONFIG:AUTOCONNECT,TRUE\n+NCONFIG:CR_0354_0338_SCRAMBLING,TRUE\n+NCONFIG:CR_0859_SI_AVOID,TRUE\n+NCONFIG:COMBINE_ATTACH,FALSE\n+NCONFIG:CELL_RESELECTION,TRUE\n+NCONFIG:ENABLE_BIP,FALSE\n+NCONFIG:MULTITONE,TRUE\n+NCONFIG:NAS_SIM_POWER_SAVING_ENABLE,TRUE\n+NCONFIG:BARRING_RELEASE_DELAY,64\n+NCONFIG:RELEASE_VERSION,13\n+NCONFIG:RPM,FALSE\n+NCONFIG:SYNC_TIME_PERIOD,0\n+NCONFIG:IPV6_GET_PREFIX_TIME,15\n+NCONFIG:NB_CATEGORY,2\n+NCONFIG:RAI,FALSE\n+NCONFIG:HEAD_COMPRESS,FALSE\n+NCONFIG:RLF_UPDATE,TRUE\n+NCONFIG:CONNECTION_REESTABLISHMENT,FALSE\n+NCONFIG:TWO_HARQ,FALSE\n+NCONFIG:PCO_IE_TYPE,EPCO\n+NCONFIG:T3324_T3412_EXT_CHANGE_REPORT,FALSE\n+NCONFIG:NON_IP_NO_SMS_ENABLE,FALSE\n+NCONFIG:SUPPORT_SMS,TRUE\n+NCONFIG:HPPLMN_SEARCH_ENABLE,TRUE\n\nOK\nAT+CGDCONT=1,\"IPV6\",\"iot\"\n\nOK\nAT+COPS=1,2,\"20201\"\n\nOK\nAT+CEREG=1\n\nOK\nAT+CSCON=1\n\nOK\nAT+CFUN=1\n\nOK\nAT+CEREG?\n\n+CEREG:1,1\n\nOK\nAT+CGATT?\n\n+CGATT:1\n\nOK\nAT+CGPADDR\n\n+CGPADDR:0,2A02:1388:400:B:2183:7DD4:B7F1:DE5A\n+CGPADDR:1\nOK\nAT+CSQ\n\n+CSQ:13,99\n\nOK\nAT+NUESTATS\n\nSignal power:-928\nTotal power:-862\nTX power:210\nTX time:549\nRX time:27140\nCell ID:290888\nECL:0\nSNR:114\nEARFCN:6390\nPCI:214\nRSRQ:-108\nOPERATOR MODE:2\nCURRENT BAND:20\n\nOK\nAT+NSOCR=DGRAM,17,1024,1,\"AF_INET6\"\n\n1\n\nOK\nAT+NSOST=1,xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx,207,6,4C4700000000\n\n1,6\n\nOK\n\n+CSCON:1\nAT+NSOCL=1\n\nOK\n\n+CSCON:0\nAT+CFUN=0\n\n+CSCON:1\n\n+CSCON:0\n\nOK\n\n+CEREG:0\n
\n\n
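As an aside on the payload format: AT+NSOST takes the data as a hex string with an explicit byte length. A small Python helper (hypothetical, but mirroring the log above, where 4C4700000000 is 'LG' followed by four zero bytes) shows the mapping; the destination address used here is a documentation-only IPv6 address:

```python
def nsost(socket_id: int, dest: str, port: int, payload: bytes) -> str:
    """Build an AT+NSOST command: <socket>,<ip>,<port>,<length>,<hex data>."""
    return f"AT+NSOST={socket_id},{dest},{port},{len(payload)},{payload.hex().upper()}"

cmd = nsost(1, "2001:db8::1", 207, b"LG\x00\x00\x00\x00")
print(cmd)  # AT+NSOST=1,2001:db8::1,207,6,4C4700000000
```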

Hopefully this will help others who have the same problem.

\n" }, { "Id": "4838", "CreationDate": "2020-02-03T11:06:15.420", "Body": "

What are the benefits of using the Greengrass vs the Node SDK package?

\n\n

We have a software stack happily interacting with IoT Core services (running on buildroot) using the Node SDK. I'm trying to understand the benefits of converting over to a full Greengrass implementation.

\n", "Title": "Benefits of Greengrass over IoT SDK", "Tags": "|aws-iot|aws-greengrass|nodejs|", "Answer": "

As with most AWS services, they provide a pre-built, managed framework that you don't have to maintain. IoT device security, configuration, local Lambda software updates, cloud-based device state shadowing, local resource management, MQTT subscription management, easy AWS service integration (like SNS and analytics), log management, and deployment version control are all handled server-side once the GG client is installed on a device. The architecture also allows for IoT devices that are not always cloud-connected, which communicate by proxy via a local GG Core device. All of the core IoT features still apply to GG devices as well (namely security and analytics).

\n\n

Depending on your application, it may still make sense to roll your own framework. But if the Greengrass architecture is a fit, it makes it a lot easier to roll out and maintain applications, albeit at the cost of additional vendor lock-in. That said, if you were to migrate an existing IoT application to GG, you might end up having to re-architect much of what you already have in place in order to take advantage of the features Greengrass has to offer.

\n" }, { "Id": "4855", "CreationDate": "2020-02-07T15:58:43.893", "Body": "

My IoT device (a Shelly Dimmer) cannot connect to the cloud.

\n

Error:

\n

Device is already owned by another User

\n", "Title": "Shelly device is already owned by another User", "Tags": "|smart-home|", "Answer": "

Answer from shelly support is:

\n\n
\n

About \"\"Device is owned from another customer\" \n 3 years ago we never expect to sell so much devices and the unick IDs running out. \n All new devices which we manifacturing now using whole MAC address as ID instead last 6 digits. \n Everyone who has this issue need to send a message to Dimitar Stanishev or Me, to provide a special firmware which will resolve it (extend ID).\n Next week this procedure will be made automaticaly from the APP.

\n
\n\n

See also support channel topic:\nhttps://www.facebook.com/groups/ShellyIoTCommunitySupport/permalink/2552658274833521

\n\n

Or solution mentioned in:\nhttps://www.facebook.com/groups/ShellyIoTCommunitySupport/permalink/2578113305621351/

\n\n

According to this, you have to flash your Shelly with a special firmware. You can do this using the link below (please adapt the IP to that of your Shelly):

\n\n
\n

Open this link on your phone using device ip\n This Is a firmware for conflicting devices on cloud.\n You ll see a White screen with with code After update.\n Do a factory reset, don't update until the device run correctly.

\n
\n\n

http://192.168.0.100/ota?url=http://shelly-api-eu.shelly.cloud/firmware/SHSW-1_patch/switch1_longid.zip

\n\n

Attention

\n\n

Make sure to select the suitable firmware for your device.

\n\n

OTA update for Shelly 1:

\n\n

http://shelly-api-eu.shelly.cloud/firmware/SHSW-1_patch/switch1_longid.zip

\n\n

OTA update for Shelly Plug S:

\n\n

http://shelly-api-eu.shelly.cloud/firmware/SHPLG-S_patch/plug-s-longid.zip

\n\n

OTA update for Shelly Dimmer:

\n\n

http://shelly-api-eu.shelly.cloud/firmware/longid_patch/dimmer-longid.zip

\n\n

OTA update for Shelly H&T:

\n\n

http://shelly-api-eu.shelly.cloud/firmware/longid_patch/ht-longid.zip

\n\n

OTA update for Shelly 2.5:

\n\n

http://shelly-api-eu.shelly.cloud/firmware/longid_patch/switch25-longid.zip

\n\n

See also:

\n\n

https://www.shelly-support.eu/forum/index.php?thread/1762-anweisung-zur-firmware-aktualisierung-bei-id-konflikt/

\n" }, { "Id": "4872", "CreationDate": "2020-02-12T21:41:04.767", "Body": "

I need to determine when a van leaves or enters a geofence range, and alert a system on my side - either via an API call or email sent - so that I can digest that data for later. My blocker is finding such a device that will work with my requirements.

\n\n

My goal is to be able to set a trigger of leaving/entering the geofence, and via internet connection that the device emits it will ping an API endpoint that I will build. The API endpoint will accept a POST of all data that the device can send to me. From there my API will update a database for additional logic.

\n\n

If pinging an API isn't possible, I could settle for an email being sent with this POST data, and my API could scrape that email for the data that I need.

\n\n

I am seeking advice on whether you know of a device that would meet these criteria. I have sunk tons of hours into reviewing devices and their manuals, calling sales reps, etc. I am reaching out to the IoT community to help me find such a device.

\n", "Title": "Advise on device to use when tracking a moving vehicle and pushing data to an API", "Tags": "|gps|tracking-devices|rest-api|geo-tagging|", "Answer": "

Assuming the van has an OBD-II port (most do), you can just grab something like this. This assumes that you need something that reports in real-time, since you'd have to pay for a data plan. That one has an app, so there must be some kind of API that it reports to; that system may have an open API you could leverage.

\n\n

Even if it doesn't, this one also sends text messages which you could forward to your application using something like Twilio. Then your app just needs to digest the text messages.

\n\n

If this is for fun, building something yourself is probably more appealing. But if this is for a business application (where you don't want to mass produce something you've prototyped) then the advantages of being able to just buy an off the shelf product are numerous.

\n" }, { "Id": "4885", "CreationDate": "2020-02-15T02:37:51.967", "Body": "

I'm using an ESP8266 NodeMCU-12e, connecting to Cayenne IOT platform. Most things work fine (I can update most widgets using virtual channels), but some widgets are unavailable (greyed-out), or simply don't work.

\n

For example, trying to add a Generic Sensor (or generic actuator etc):

\n

\"enter

\n

...or Luminosity Sensor:

\n

\"enter

\n

...the "Select Device" drop down is greyed out, and I can't proceed any further with configuring and adding the widget.

\n

If I manually add a 2-State widget, I can configure and add it, but even though the on/off trigger data is being received by Cayenne, the widget does not toggle on/off. It remains in an always-off state no matter what.

\n

I've tried adding my device as a Generic ESP device, as well as a "Bring your own device", to no avail.

\n

I've scoured for hours online, but haven't found anything. Does anyone know what the problem may be?

\n", "Title": "Cayenne IOT widgets greyed out using ESP8266 NodeMCU-12e", "Tags": "|esp8266|arduino|", "Answer": "

I've sorted out both issues.

\n\n

For the issue of some of the sensors/widgets having their \"Select Device\" greyed out, I got around that by creating a new top-level device, and lying to the system by saying I was using an Arduino Uno with ESP8266 Wifi.

\n\n

To get the 2-State toggle widgets to work (with my ESP8266 NodeMCU-12e), I updated my code to specify that I'm working with a digital sensor.

\n\n

When it wasn't working, I had:

\n\n
Cayenne.virtualWrite(FLWR_LIGHT_1000W, lightState);\n
\n\n

I changed it to:

\n\n
Cayenne.virtualWrite(FLWR_LIGHT_1000W, lightState, \"digital_sensor\", \"d\");\n
\n\n

The 2-State widget now toggles properly as the data arrives.

\n" }, { "Id": "4908", "CreationDate": "2020-02-27T07:24:30.007", "Body": "

I am redesigning our bathroom and currently looking at two different light sources

\n\n\n\n

I also want the bulb to be dimmable. I found this bulb, which needs a classic triac dimmer. Since I want to control the lights using a self-built controller (based on the SAM51J), I do not know the correct approach.

\n\n

I was looking for dimmable 12 V bulbs but was not able to find anything commercial. Either they are dimmable via an LED driver, or they are \"smart\" bulbs (Zigbee or similar).

\n\n

Is there an E27 bulb which I can dim via a 12 V LED driver?

\n", "Title": "Dimmable LED bulb", "Tags": "|lighting|smart-lights|", "Answer": "

In the meantime I found a solution, which also changed the design:

\n\n" }, { "Id": "4919", "CreationDate": "2020-03-05T10:36:51.023", "Body": "

I have a CC2531 connected via USB to my server to read/control some Zigbee devices. I use the version with external antenna.

\n\n

\"enter

\n\n

I found that the location of the server is not optimal and I would like to move the sniffer to another place (near the wifi router), where I don't have a USB connection to the server available. The router cannot run OpenWRT so I cannot connect the CC2531 to the router and expect it to work.

\n\n

Adding an ESP8266 to obtain a TCP-to-serial bridge would be a good solution, such as the one described in Zigbee2MQTT bridge; however, those instructions expect a CC2530, which is identical to the CC2531 except that it lacks the USB interface.

\n\n

How can I connect the ESP8266 directly to the CC2531 via serial?

\n", "Title": "Can I use the CC2531 Zigbee sniffer with a ESP8266 via serial, no USB involved?", "Tags": "|esp8266|zigbee|", "Answer": "

I was able to connect to CC2531 via serial by treating it as a CC2530 and ignoring its USB: I flashed CC2530 firmware to it, then connected the serial lines as you would to a CC2530 -- to the P0.2 and P0.3 pins, and powered it via the Vcc/GND pins.

\n\n

Tested by running Tasmota 8.2.0 against it and pairing one endpoint. Everything worked as expected.

\n\n

As I'm writing this, the Tasmota Zigbee docs still say a CC2531 cannot be used; I'll get that fixed :-)

\n\n

All of this was with a CC2531 made by Ebyte (model E18-2G4U04B) and the CC2530_DEFAULT_20190608 Z-Stack 1.2 coordinator firmware by Koenkk. The dongle made by TI should probably work too, though I don't have one to confirm this.

\n" }, { "Id": "4921", "CreationDate": "2020-03-05T19:26:59.617", "Body": "

The Zigbee spec defines the types of nodes: coordinators, routers, and end devices. I'm looking to buy some Zigbee devices, but I can't find in the device manual if the device is a Zigbee Router or an end device. Example device manual

\n\n

How can I tell if a Zigbee device is a Zigbee Router? Can you provide an example of a device which is labeled as a Zigbee router?

\n", "Title": "How to tell if a Zigbee device is a Zigbee Router", "Tags": "|zigbee|mesh-networks|", "Answer": "

In general, if it is connected to mains power, it\u2019s a router, whereas if it is battery-powered it\u2019s an end device. Not sure this is 100% true, but probably very close to it.

\n\n

This page shows it is an FFD (full function device), so it can act as a router.

\n" }, { "Id": "4927", "CreationDate": "2020-03-09T10:32:36.200", "Body": "

I have a BG96 Arduino Shield module connected to an OpenMV Cam H7. I need to send the image to the cloud for further processing. I send each pixel of the image, and I can use HTTPS, UDP or MQTT. My questions are:

\n\n

-Is it possible to use my own computer as a cloud environment?

\n\n

-I will only send a couple of pictures for documenting, so do I need to set up something like a virtual machine?

\n", "Title": "Sending an image to the cloud", "Tags": "|mqtt|https|", "Answer": "

Your question is a bit confusing for this forum; however, we can provide some pointers so that you can rephrase your question and progress on these topics.

\n\n

You do not need any kind of VM; however, to do that you need:

\n\n\n" }, { "Id": "4931", "CreationDate": "2020-03-10T10:43:52.267", "Body": "
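To expand on using your own computer as the \"cloud\": one minimal option is a plain HTTP endpoint that accepts the image as a POST body. A Python sketch (the port choice and in-memory storage are arbitrary illustrations, not any specific product's API):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

last_upload = bytearray()  # keep the most recent image in memory

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        last_upload[:] = self.rfile.read(length)  # raw image bytes
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for a free port; use a fixed one (e.g. 8080) in practice.
server = HTTPServer(("0.0.0.0", 0), UploadHandler)
# server.serve_forever()  # uncomment to run standalone
```

The camera side (or a test client) then POSTs the JPEG bytes to http://<your-ip>:<port>/. For anything beyond a few documentation shots, you would want TLS and authentication in front of this, which is where a proper broker or cloud service earns its keep.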

A lot of documentation exists for RESTful APIs via tools like Swagger. But I am looking for a documentation tool or something in the likes for open data sharing for protocols like MQTT or OPC-UA.

\n

A practical scenario may be one where I have a data source which I publish periodically to an MQTT broker, and I make available the broker's address as well as documentation of the topics that can be subscribed to, the data format, etc. I haven't actually stumbled upon anything of the sort, except for the respective RESTful APIs of the databases where these data are ultimately stored, which can be queried for dumps of such data.

\n

Are there already software tools, standardization bodies, etc. that are looking into such aspects?

\n", "Title": "Documentation similar to open API for IoT protocols like MQTT for open data sharing", "Tags": "|mqtt|data-transfer|", "Answer": "

Check AsyncAPI: https://www.asyncapi.com/docs/getting-started/coming-from-openapi/

\n\n

From their website:

\n\n
\n

Open source tools to easily build and maintain your event-driven architecture. All powered by the AsyncAPI specification, the industry standard for defining asynchronous APIs.

\n
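For the MQTT scenario in the question, an AsyncAPI document plays the role that an OpenAPI/Swagger file plays for REST: it names the broker, the topic, and the message schema. A minimal illustrative sketch (the title, topic and broker address are made up):

```yaml
asyncapi: '2.6.0'
info:
  title: Sensor feed
  version: '1.0.0'
servers:
  production:
    url: broker.example.com:1883
    protocol: mqtt
channels:
  sensors/temperature:            # the MQTT topic consumers subscribe to
    subscribe:
      message:
        payload:
          type: object
          properties:
            celsius:
              type: number
            timestamp:
              type: string
              format: date-time
```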
\n" }, { "Id": "4939", "CreationDate": "2020-03-12T17:26:35.230", "Body": "

I want to set up a simple NAS at home. I've seen some options on the internet, but they're either too complex or use a USB dock for the drive, which limits the bandwidth. I was wondering if it is possible to achieve a nice setup using a USB-C 3.1 SSD, but I don't know much about the current boards; the Raspberry Pi 4 seems to have USB-C, but only for the power supply. Is there a board that would fit this purpose, or some kind of component to connect the SSD to a board to take advantage of the USB-C 3.1 bandwidth?

\n", "Title": "Board with usb-c 3.1 for data transfer", "Tags": "|usb|", "Answer": "

USB is a bit of a mess right now with USB-A, USB-C, USB 3.0, USB 3.1 Gen 1 and Gen 2, USB 3.2 Gen 1, Gen 2 and Gen 2x2. But you don't need a type-C connector to get the speeds you need.

\n\n

For a NAS for home use (i.e. with probably a limited number of users, unlikely to all actually use the NAS at the same time, with at best Gigabit Ethernet connections), as long as you get over standard USB 2.0, your limiting factor is much more likely to be the network (or the CPU) than the interface between the CPU and the drive.

\n\n

Any of the USB 3.x variants supports at least 3.2 Gbit/s of throughput, which is much more than Gigabit Ethernet or any of the currently available actual Wi-Fi speeds.

\n\n

So anything that supports USB 3.x should do, though you may very well have bottlenecks other than the interface itself:

\n\n\n\n

A Raspberry Pi 4 has two USB 3.0 interfaces (in standard USB-A shape, with a blue colour), they should be more than enough to connect a suitable SSD drive.

\n\n

Raspberry Pi 4 USB benchmarks show performance in the region of 320-360 Mbytes/s, which translates to 2.5 to 2.8 Gbit/s. Again, a lot more than what even the Gigabit Ethernet interface can deliver. RAM has decent performance as well, and ditto for the Gigabit Ethernet. Do not count on Wi-Fi for decent results.

\n\n
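The unit conversion behind those benchmark figures is simply bytes-to-bits; a quick Python check:

```python
def mbytes_per_s_to_gbit_per_s(mbytes: float) -> float:
    """1 Mbyte/s = 8 Mbit/s; divide by 1000 for Gbit/s."""
    return mbytes * 8 / 1000

print(mbytes_per_s_to_gbit_per_s(320))  # 2.56 Gbit/s
print(mbytes_per_s_to_gbit_per_s(360))  # 2.88 Gbit/s
```

Both figures sit comfortably above the ~1 Gbit/s ceiling of the Pi's Ethernet port, which is the point: the network, not the drive interface, is the bottleneck.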

So if your drive comes with a USB-C connector, all you need is a USB 3.0 USB-A to USB-C cable. You can recognise it by the blue colour of the USB-A plug (and if you look closely you'll see there are 9 contacts in there instead of the 4 in USB-A 2.0). The drive actually probably comes with such a cable.

\n\n

Note that in your question you mention USB docks \"which limit the bandwidth\". This really depends on exactly what kind of USB and interface.

\n" }, { "Id": "4949", "CreationDate": "2020-03-18T11:34:27.320", "Body": "

I am new to IoT and LoRa, and still trying to grasp the scope and capabilities after discovering a whole technology I never knew existed.

\n\n

I have read a bit about LoRaWAN and IoT, but am confused about the technology. Are LoRaWAN and IoT wide area networks that are independent of the World Wide Web, or are they part of it? I am also not clear on the capabilities and functionality of LoRa devices.

\n\n

This is what I am trying to do, and would like to see if I can utilize LoRa technology to accomplish this:

\n\n

Basically I want to set up a wireless link between two sites which are ~.5 mile apart in which siteA will be connected to the internet (www) and the siteB may connect to the internet through siteA's gateway. I find information indicating there are devices which interface LoRa devices to ethernet. Is it possible to use these devices to set up a wireless link over LoRa?

\n\n

Example, ethernet-to-LoRa interface is connected to the gateway at siteA and to a LoRa device. LoRa device at siteA communicates with LoRa device at siteB, which is also connected to a ethernet-to-LoRa interface. Now a computer or WiFi access point is connected to siteB ethernet-to-LoRa interface, and user at siteB is able to access the world wide web.

\n\n

I understand there will be a sacrifice in network speed, and I wouldn't expect to be able to stream movies etc.

\n", "Title": "Using LoRa as a www link", "Tags": "|wifi|lora|lorawan|", "Answer": "

I think you missed how slow LoRa is. It is very, very, very, very slow.

\n\n

The slowest data rate (at SF12, BW125) is just 250 bits per second. That's 31 bytes per second. Just the text of this answer would take 50 seconds to send, without counting any overhead. This full page, including contents, styles, scripts, images, etc. is over 3 MB, that would take over a day to send!

\n\n

The fastest data rate varies between 11000 and 21900 bits per second depending on the region. Much faster, but that still brings us to the speeds we were used to back in 1994 or thereabouts. The web has since evolved to take advantage of the multi-megabit speeds afforded by broadband, so again, a page like this one would take about 20 to 40 minutes to send.

\n\n

Also, in some regions, there are regulatory restrictions, and you can't transmit more than 1% of the time in the same band for instance, so that would multiply everything by 100!

\n\n
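A back-of-the-envelope check of those transfer times, assuming a 3 MB page and the data rates quoted above (protocol overhead ignored):

```python
PAGE_BITS = 3 * 1024 * 1024 * 8  # a ~3 MB web page, in bits

def transfer_time_s(bits, rate_bps, duty_cycle=1.0):
    """Ideal transfer time in seconds, ignoring protocol overhead."""
    return bits / rate_bps / duty_cycle

slow = transfer_time_s(PAGE_BITS, 250)             # SF12/BW125
fast = transfer_time_s(PAGE_BITS, 21900)           # fastest regional rate
limited = transfer_time_s(PAGE_BITS, 21900, 0.01)  # with a 1% duty cycle

print(round(slow / 3600, 1))  # ~28 hours: "over a day"
print(round(fast / 60))       # ~19 minutes at the very fastest rate
print(round(limited / 3600))  # ~32 hours once the 1% duty cycle applies
```

Even in the best case, a single page load takes tens of minutes; with duty-cycle limits it is back to more than a day.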

LoRa is designed for sensors which send very very little data very very rarely, not to exchange files, transmit large amounts of data, browse the web or anything like that.

\n\n

You may want to consider long-distance Wi-Fi (though, again, there are EIRP limitations), WiMax, and other point-to-point radio technologies. Note that radio technologies don't like obstacles. Hills, buildings, trees, etc. will reduce your range a lot (or even completely block transmission). You want not only line of sight between the two antennas, but you need that line of sight to remain well above any obstacle (see Fresnel zone for details).

\n\n

Or install a separate Internet access, or use a 4G modem, at the second location.

\n" }, { "Id": "4954", "CreationDate": "2020-03-21T04:03:46.270", "Body": "

I have built a door sensor where an Arduino Uno reads a reed switch and transmits a 433 MHz RF signal to another Arduino Uno connected to an RF receiver. I am using the rc-switch library and cheap ASK/OOK transmitter/receiver modules from AliExpress (433 MHz RF Wireless Transmitter Module and Receiver Kit).

\n

The range I get is less than 1 m. This is with a 17 cm, 0.6 mm straight wire connected as an antenna on both the transmitter and the receiver. I have read on the internet that these receiver modules are noisy and next to useless.

\n

Will the transmitter I have work with the HC-12 receiver module? I want 15-20 m of range to cover all doors and windows.

\n

The idea is to use many such cheap transmitters and one good receiver in a central place to keep the overall cost low.

\n", "Title": "Fs1000a generic cheap RF transmitter compatibility with Hc12 receiver?", "Tags": "|arduino|wireless|", "Answer": "

I also couldn't help but succumb to the temptation of these cheap 433.92 MHz RX/TX pairs. Mine are of the eBay variety ($0.89 a pair with free shipping).

\n\n

I observed the same behavior with 1/4-wave whips attached, although not quite as bad as you describe. Through tinkering I found that thicker (lower-AWG) enameled wire gave slightly better results: about an extra two meters line of sight using 23 AWG enameled magnet wire, but it was barely penetrating the drywall to the next room. Not good enough for my projects, unfortunately, so back to the drawing board I went.

\n\n

As a side note, the quarter wave whip had to be spiraled in my project boxes which will change the radiation pattern of the antenna. (Found this out when I turned to my RTL-SDR for troubleshooting)

\n\n
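Incidentally, the 17 cm whip mentioned in the question is simply a quarter wavelength at 433.92 MHz, which is easy to verify:

```python
C = 299_792_458  # speed of light, m/s
f = 433.92e6     # carrier frequency, Hz

wavelength_m = C / f
quarter_wave_cm = wavelength_m / 4 * 100
print(round(quarter_wave_cm, 1))  # ~17.3 cm, matching the ~17 cm wire
```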

A friend told me to try a \"coil-loaded antenna\", which is an antenna with an air-cored inductor. I did some searches for 433 MHz coil-loaded antennas and found plenty of results, including calculators to build them for any desired frequency. These antennas also have the added benefit of shrinking the antenna to a more compact form.

\n\n

Here is a step-by-step PDF for building 433.92 MHz coil-loaded antennas, written by Ben Schueler (NL). I was very pleased with the performance of these DIY antennas. I was still using the 23 AWG wire to make them (make sure it's enamel-insulated or it will not work), which wound up being very close to the instructions' designated magnet wire size. It still didn't reach the far end of the house on the second floor...

\n\n

Here is an interesting bit about the transmitter; I didn't find this info in any instructions for these, but my eBay seller had it listed in the item description:

\n\n

\"cheap\n


\nAh, the plot thickens. I strongly dispute the distance listed for the transmitter, and unless I just missed something, I'm calling B.S. So, here is what I gathered from tinkering with them.

\n\n

On a 3.3v Rail

\n\n\n\n

On a 5v Rail

\n\n\n\n

5v and above

\n\n

I used one of these cheap eBay boost modules, mainly because I already had some, and I am pretty lazy when it comes to building circuits like this. I carefully adjusted the 10k pot while watching the voltage with a multimeter, and at each +1 V step I tested the range.

\n\n

Here is what I found:

\n\n\n\n

All in all, these aren't bad for the price (if you know the above information). The sweet spot for these seems to be around 9 V, as the module doesn't heat up and gets pretty decent distance. (My max was 250 ft LOS; keep in mind my transmissions are 8-bit codes.)

\n\n

I hope this helps you and future readers out with projects!

\n" }, { "Id": "4972", "CreationDate": "2020-03-29T03:33:53.627", "Body": "

I am wondering if anyone knows of some existing \"off the shelf\" hardware where a device could connect to a Bluetooth \"input hub\": a hub that has inputs plugged into it, where any data received on an input would inform the connected Bluetooth device that sensor X sent value Y.

\n\n

This would allow any Bluetooth capable device to receive/read inputs and do something with that information.

\n\n

Not sure where else to ask if this is not the right place; thanks in advance.\nAlso, does Bluetooth disconnect after being connected for a long time?

\n\n

(Output is not required but bonus if it can)

\n", "Title": "Bluetooth Input Hub", "Tags": "|sensors|hardware|bluetooth|", "Answer": "

Your question is very very broad, but here are a few starting points...

\n\n

Nearly any device with BLE support and GPIO inputs could serve that purpose, including all ESP32-based devices (except those based on the new ESP32-S2, which doesn't have Bluetooth), nRF51/nRF52-based devices, and many more.

\n\n

In terms of development boards:

\n\n\n\n

In terms of \"packaged\" devices (but programmable, and with inputs available, though not quite easily accessible when in their package, but they also have built-in sensors):

\n\n\n\n

Really, the choice is yours. Depending on what you want to connect to them, the features you want, how you want to power it, whether you want a prepackaged device or have more freedom, your favorite language (Python, C, Javascript...), and so on, one or the other (and possibly many, many more) could be the best fit.

\n" }, { "Id": "4983", "CreationDate": "2020-04-02T12:28:29.647", "Body": "

To use an IP camera you need an app on your phone; when you set up the camera, you connect to it from your phone. But if you need to give somebody else access to this camera, can you export the settings from the already-connected phone and send them to another person (in another city), so that they can connect to this camera?

\n\n

Or maybe I can connect using the IP address? But how can I get it?

\n\n

Probably it depends on the type of camera; mine is an EZVIZ C6CN 1080p Indoor Pan/Tilt Wi-Fi Security Camera.

\n", "Title": "Can I connect to surveillance camera remotely without direct access?", "Tags": "|surveillance-cameras|ip-address|", "Answer": "

Typically, security camera setups work like this: each vendor has their own app or website where you can access your video stream. Once you have access to this app or website, you can share the credentials with anyone, and they can view the stream in the same app or website on their devices. So all you need to view the stream is the authentication credentials.

\n" }, { "Id": "4990", "CreationDate": "2020-04-05T11:56:52.910", "Body": "

How does Google Home manage the connection to stream security camera video to a smart TV? Does Google Home work as a proxy while the camera is streaming the video, or does it somehow command the TV (how?) to pull the stream directly from the camera, without Google Home in the middle?

\n\n

I know that the Google Home SDK can bypass the cloud, but how does it work locally?

\n\n

Is it?

\n\n
Security camera -> Google home -> smart TV\n
\n\n

or does Google Home manage to make a direct connection? If yes, then how?

\n\n
Security camera -> smart TV \n
\n\n

Thanks

\n", "Title": "When stream security camera on TV using Google Home speaker, does the stream go through Google home?", "Tags": "|google-home|communication|streaming|", "Answer": "

This really depends on how the camera manufacturer has set things up, but it will not go via the Google Assistant device (in this case the speaker).

\n\n

Google Assistant camera support works in much the same way as any other Chromecast video stream. This means that Google Assistant will send two URLs to the TV.

\n\n

The first points to the viewer application. This is basically an HTML page containing a video element; it may be the generic Google-provided one that is just a full-screen video, or a camera vendor's customised version with their logo overlaid or similar.

\n\n

The second URL is where to find the stream from the camera, and here there are two options:

\n\n
    \n
  1. The camera natively supports one of the video formats that the Chromecast spec supports; in this case the link can point directly to the camera.

  2. The camera doesn't support any of the required video formats, so it streams its output to a cloud service run by the manufacturer, where it can be transcoded; the link then points to the cloud service.
\n\n

Modern cameras tend to be capable of option 1, as this allows the lowest latency and is more secure (since the video never leaves your home network). On the other hand, it means the video feed is only available if the TV is on the same network. Option 2 means that you can also view the stream when away from home, e.g. on your phone, which tends to be one of the key use cases for these sorts of cameras.

\n\n

Either way the video does not actually pass through your smart speaker (unless you are viewing the video on a Google Home Hub device with a screen).

\n\n

The Google Home Local Control SDK is different and does not apply in this case.

\n" }, { "Id": "4992", "CreationDate": "2020-04-05T19:22:01.273", "Body": "

I am very new to IoT (primarily the connected-cars segment) and came across terms like \"shoulder tap\" and using SMS to wake a device. I did search online, but didn't find proper information.

\n\n

Can anyone please help me understand what a \"shoulder tap\" is, what it is used to wake, and how we achieve this? I read it uses SMS.

\n\n

PS: I am not sure which tag to use for this question, so any suggestion would be of great help.

\n", "Title": "What is Shoulder Tap in IOT?", "Tags": "|aws-iot|", "Answer": "

Many IoT devices are battery-powered, and to conserve battery, enter sleep modes as much as they can. They usually get out of sleep in either of two cases (or both):

\n\n\n\n

When they wake up, they connect to a server, send their data, may check for data from the server (configuration updates...), and then go back to sleep.

\n\n

The issue is that if you want to talk to the device immediately, it won't be possible; you'll have to wait for its next wake-up connection.

\n\n

\"Shoulder tap\" is a way to remotely wake up the device. The idea is that most of the device is off, and only a low-level cellular connection is maintained, in some low-power mode. Then, whenever the server wants to talk to the device, it sends an SMS, and the modem will wake up the device to do whatever is requested.

\n\n

This requires the device to have a cellular modem which is capable of entering low-power modes while still receiving SMS messages, and the modem must be able to wake up the device. Maintaining the cellular connection in a state that allows reception of SMS will most likely increase the power draw quite a bit compared to the usual deep-sleep modes, so the benefit comes at a cost.

\n\n
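As a rough sketch of the device side in Python: the `+CMT:` unsolicited result code is the standard new-SMS indication defined in 3GPP TS 27.005, but the `WAKE` keyword and the two-line interface below are assumptions made up purely for illustration:

```python
def is_shoulder_tap(urc_line: str, sms_body: str, keyword: str = "WAKE") -> bool:
    """Return True when a modem's unsolicited SMS indication is a wake-up tap.

    The modem delivers a text-mode SMS as two lines: the '+CMT:' header,
    then the message body. 'WAKE' is a made-up magic word for this sketch.
    """
    return urc_line.startswith("+CMT:") and sms_body.strip() == keyword

print(is_shoulder_tap('+CMT: "+15551234567",,"24/04/05,11:20:03+00"', "WAKE"))   # True
print(is_shoulder_tap('+CMT: "+15551234567",,"24/04/05,11:20:03+00"', "hello"))  # False
```

On a real device this check would run in the modem-facing firmware, and a positive match would trigger the full wake-up and server connection.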

The exact details vary a lot with the type of device, what kind of battery it can count on, how often and for how long it would be awake or sleeping, the sleep modes in use, the type of modem, the kind of cellular connection, and so on.

\n\n

What is your actual use case, and what is the actual problem you are trying to solve?

\n" }, { "Id": "5007", "CreationDate": "2020-04-13T15:10:35.823", "Body": "

I'm curious to know what the performance of an MQTT server is like. So I created this Python script:

\n\n
import time\nimport paho.mqtt.client as mqtt\nfrom multiprocessing import Pool\n\n\ndef on_connect(client, userdata, flags, rc):\n    print("Connected with result code "+str(rc))\n\n    client.subscribe("test")\n    while True:\n        client.publish("test","hello world")\n\n\ndef on_message(client, userdata, msg):\n    print(msg.topic+" "+str(msg.payload))\n\n\nclient = mqtt.Client()\nclient.on_connect = on_connect\nclient.on_message = on_message\n\nclient.connect("myserver.example.com", 1883, 60)\nclient.loop_forever()\n
\n

When I run this script with python3 myscript.py, the terminal just stops with this message Connected with result code 0.

\n

When I get rid of the while loop and just do this:

\n
def on_connect(client, userdata, flags, rc):\n    print("Connected with result code "+str(rc))\n\n    client.subscribe("test")\n    client.publish("test","hello world")\n    client.publish("test","hello world")\n    client.publish("test","hello world")\n    ... print this another 20 times ...\n
\n

Everything works. I can even receive these messages on another client device subscribed to the MQTT server. Things only stop working if I put client.publish inside an infinite while loop. I also tried using asynchronous calls within the while loop, and putting a time.sleep() in the loop, but nothing changed; the while loop still hangs.

\n

What am I doing wrong? How do I get my Python script to continuously publish to the MQTT server?

\n", "Title": "Unable to publish MQTT server in an infinite while loop in Python script", "Tags": "|mqtt|paho|python|", "Answer": "

You should not run long-running code (infinite loops) in the callbacks.

\n

All the callbacks run on the client network thread's main loop (the one started by client.loop_forever()).

\n

This means that if the on_connect() callback never returns, the network thread never gets around to handling the calls to client.publish() made in the loop.

\n

The individual client.publish() calls work because they just build up a queue of messages to be published; once on_connect() returns, the network thread can process that backlog.

\n

P.S. your micro benchmark is not likely to give realistic results, but if you really want to run it you can try:

\n\n
import paho.mqtt.client as mqtt\n \ndef on_connect(client, userdata, flags, rc):\n    print("Connected with result code "+str(rc))\n    client.subscribe("test")\n\n\ndef on_message(client, userdata, msg):\n    print(msg.topic+" "+str(msg.payload))\n\n\nclient = mqtt.Client()\nclient.on_connect = on_connect\nclient.on_message = on_message\n\nclient.connect("myserver.example.com", 1883, 60)\nwhile True:\n    client.publish("test","hello world")\n    client.loop()\n
\n" }, { "Id": "5011", "CreationDate": "2020-04-16T14:33:00.107", "Body": "

I have an architecture where many sensors (hundreds of them), located in different places (hundreds of kilometers apart), send data to a remote database. Is MQTT suitable for this kind of configuration?

\n\n

I was thinking of installing the MQTT broker and my backend on the same server, and making the backend subscribe to one topic where all the sensors would be writing. So there would not be any communication between sensor nodes; the communication would be only between each sensor node and the server.

\n\n

Also, the sensor nodes would be grouped by client (maybe 10 nodes in the same place).

\n", "Title": "MQTT for many-to-one communication", "Tags": "|mqtt|sensors|", "Answer": "

This sort of thing is perfect for MQTT.

\n\n

You also don't need to have all the sensors publishing on the same topic: they could all publish under the same topic prefix, and the processing app can use a wildcard subscription to have all the messages delivered. Or you can have multiple backend processing apps that split the load by subscribing to different wildcards.

\n\n

e.g. a topic made up like

\n\n
 country/region/city/sensor-id\n
\n\n

You could then have different processing apps subscribe to

\n\n
England/#\nScotland/#\nUSA/Florida/#\nUSA/California/SanFrancisco/#\n
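To illustrate how those wildcard filters match, here is a simplified re-implementation of the matching a broker performs (paho-mqtt also ships a ready-made `topic_matches_sub()` helper for the same purpose):

```python
def topic_matches(sub, topic):
    """Simplified MQTT topic-filter matching for '+' and '#' wildcards."""
    sub_parts = sub.split("/")
    topic_parts = topic.split("/")
    for i, part in enumerate(sub_parts):
        if part == "#":                    # multi-level wildcard: matches the rest
            return True
        if i >= len(topic_parts):          # filter is longer than the topic
            return False
        if part not in ("+", topic_parts[i]):
            return False
    return len(sub_parts) == len(topic_parts)

print(topic_matches("England/#", "England/London/Camden/sensor-17"))          # True
print(topic_matches("USA/Florida/#", "USA/California/SanFrancisco/sensor-2")) # False
print(topic_matches("+/+/+/sensor-2", "USA/California/SanFrancisco/sensor-2"))# True
```

So each backend app simply picks a filter covering its slice of the topic tree, and the broker delivers only the matching sensor messages.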
\n" }, { "Id": "5049", "CreationDate": "2020-05-04T19:24:29.530", "Body": "

I'm making a simple automation system for someone and need an MQTT admin panel for the server/broker. I want the admin who runs the server to be able to see and change the password of the server or of a client. The admin should also be able to see all the topics, remove the subscription of any client if they want, and see all the messages sent and received.

\n

I am currently looking at C# MQTTnet and Mosquitto, but these servers all require manipulating config files through the CLI (their own set of CLI commands). Isn't there something I can use so that it is all done in clean code, with a WPF form for the interface? Is there a solution for this problem? I want a user interface for the admin.

\n

Also, can an MQTT server be run by a novice user who has no technical knowledge?

\n

I am new here; Stack Overflow sent me over to ask this question.

\n", "Title": "Is there a complete admin panel/interface for MQTT net or or any other private MQTT Servers/Broker?", "Tags": "|mqtt|mosquitto|", "Answer": "

If this is purely user authentication and authorisation administration, then none of it should be done directly in the broker.

\n\n

The username/password information and the ACLs for topics are all held in an external database (using things like the mosquitto auth-plugin), and how you choose to update that DB is entirely up to you, as it will totally depend on what other systems it needs to integrate with (e.g. existing staff/user lists).

\n\n

An administrator would never interrogate the broker for what topics a client is subscribed to (and definitely would not try to edit that list in the broker); instead they would set up the ACLs for that user to control which topics they are allowed to publish or subscribe to.

\n\n
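For illustration only: with mosquitto's simplest built-in mechanism (a static `acl_file` rather than the auth-plugin's database), such per-user rules look like the sketch below; the usernames and topics are made up:

```
# referenced from mosquitto.conf with:  acl_file /etc/mosquitto/aclfile

# a backend monitoring client that may read everything
user backend-monitor
topic read #

# an ordinary user, restricted to their own namespace
user alice
topic readwrite home/alice/#

# pattern rules apply to all users; %u is substituted with the username
pattern readwrite devices/%u/#
```

The auth-plugin approach expresses the same kind of rules, just sourced from a database instead of a file.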

As for monitoring messages, this will very much depend on the volume of messages, but a broker will happily process more messages than it makes sense to try to visualise unfiltered. The way to do this is just to have a client with the correct ACL entries to be able to see everything.

\n\n

As for a novice administering the broker: there should be nothing for them to administer except the users and which pre-defined ACL groups those users belong to. The ACL grouping should have been determined by the solution architect as part of the design and, as I said earlier, should be integrated with whatever systems already exist.

\n" }, { "Id": "5081", "CreationDate": "2020-05-30T16:06:15.673", "Body": "

I use a todo list app called TickTick. The app has a Google Assistant integration available. However, when I try to say \"Talk to TickTick\" Google Assistant always talks to another quiz app called \"Tik Tik\" instead.

\n\n

Is there some way I can fix Google Assistant to talk to TickTick todo list app instead of this other quiz app?

\n", "Title": "How can I get Google Assistant to talk to \"TickTick\" instead of \"Tik tik\"?", "Tags": "|google-home|google-assistant|", "Answer": "

Setup a \"Routine\" in the Google Assistant settings such that:

\n\n\n\n

Now, all I have to say is: \"Okay Google. Notepad\"

\n" }, { "Id": "5087", "CreationDate": "2020-06-04T08:38:50.507", "Body": "

I'm using Raspberry Pi 3 B+ with a SIM7600X 4G HAT for SMS and mobile internet connection.

\n\n

I want to use Paho MQTT library (it works fine), but it only works via Ethernet or Wifi.

\n\n

How can I use the Paho MQTT library from python to use the SIM7600X HAT mobile internet connection?

\n\n

I tried using the SIM7600X AT+MQTT commands directly, writing to the serial port to subscribe to a topic, and it works fine (so there is mobile internet through the 4G HAT), but after a while (1-2 hours) it keeps losing the connection, and I can't release the client and reconnect. That's why I thought I should use the Paho MQTT library, but it doesn't work without an Ethernet or Wi-Fi connection.

\n", "Title": "Python Paho MQTT doesn't work via LTE", "Tags": "|mqtt|raspberry-pi|paho|python|", "Answer": "

The Paho Python library will work just fine via LTE. The library is built to interact with the OS's TCP/IP stack; it has no knowledge of the underlying hardware or of how that TCP/IP stack talks to the outside world.

\n\n

If you want it to work you need to have the LTE connection presented as a network device to the OS, not just a serial port.

\n\n

I suggest you go look at things like PPP and how to use it to make a \"dial-up\" style connection to your LTE provider.

\n" }, { "Id": "5092", "CreationDate": "2020-06-04T23:20:04.170", "Body": "

When I think of the Internet of things, I think of fairly low bandwidth devices connected to the internet.

\n\n

That could be automotive telematics, medical devices, building monitoring, factory sensors, etc.

\n\n

However, I was just thinking of smartphones in terms of internet of things and had to correct myself... or did I?
\nCan a smartphone be regarded as part of a subset of the internet of things?

\n\n

Is there a good definition that segregates connected computers from IoT and states what exactly IoT encompasses?

\n", "Title": "What exactly does the internet of things encompass?", "Tags": "|definitions|", "Answer": "

Yes, they are part of the Internet of Things (IoT). Smartphones can be interconnected in the IoT with other objects. Besides, they are smart objects [1]: they can obtain data using their sensors, they can actuate things or perform actions, and they can process information.

\n\n

Smart object definition [1]: any electronic device that can be connected to the Internet and collect data, like a sensor, or perform an action on an object (such a device is normally called an actuator).

\n\n

IoT Definition: 'The Internet of Things is the interconnection of heterogeneous objects through the Internet' [2].

\n\n

[1] https://www.researchgate.net/publication/307638707_A_review_about_Smart_Objects_Sensors_and_Actuators\n[2] https://www.researchgate.net/publication/260252345_Midgar_Generation_of_heterogeneous_objects_interconnecting_applications_A_Domain_Specific_Language_proposal_for_Internet_of_Things_scenarios

\n" }, { "Id": "5098", "CreationDate": "2020-06-09T06:59:23.500", "Body": "

TL;DR When trying to use some pins of my ESP32 to read analog signals, it turns out those pins have a non-zero voltage, messing up the measurements. Why?

\n\n

I got myself an Olimex ESP32-POE-ISO (see specs) to run the irrigation of my garden. I am attaching some Hunter valves to GPIO0-5, and the plan was to hook up 3 moisture/temperature sensors (Truebner SMT50) to the pins on the other side of the module (see pinout).

\n\n

However, I ended up pulling my hair out. On some pins (e.g. GPIO14/ADC2_CH6, GPIO32/ADC1_CH4, GPIO33/ADC1_CH5, GPI35/ADC1_CH7) I get proper readings. I've tried both outputs (moisture and temperature) of each of the 3 sensors on those pins and the values I get look reasonable, so I am ruling out defective sensors.

\n\n

I have also tried GPIO13/ADC2_CH4, GPIO15/ADC2_CH3, GPI36/ADC1_CH0, GPIO0/ADC2_CH1 and GPIO2/ADC2_CH2, but I always get numbers that are way too high (raw values at 12 bit between 2400-3400, corresponding to voltages of 1.9 V - 2.7 V). And in fact, after disconnecting the sensor and measuring with a multimeter, I found that those pins actually do carry such a voltage (measured against the GND pin), while the \"good\" pins do not.

\n\n

The initialization code looks like this (channel.channel.adc1_id and ...adc2_id contain values like ADC1_CHANNEL_0, ...):

\n\n
void SensorService::init() {\n    ESP_LOGI(TAG, \"Initializing sensor service\");\n\n    adc1_config_width(ADC_WIDTH_BIT_12);\n    sensorToChannel = getChannelMapping();\n    for( const auto& [ idx, channel ] : sensorToChannel) {\n        switch (channel.unit) {\n            case ADC_UNIT_1:\n                adc1_config_channel_atten(channel.channel.adc1_id, ADC_ATTEN_11db);\n                break;\n            case ADC_UNIT_2:\n                adc2_config_channel_atten(channel.channel.adc2_id, ADC_ATTEN_11db);\n                break;\n            default:\n                ESP_LOGW(TAG, \"Invalid ADC unit requested\");\n                break;\n        };\n    }\n}\n
\n\n

The reading of raw values like this:

\n\n
std::optional<unsigned int> SensorService::getRawValue(unsigned int sensorIdx) {\n    ESP_LOGV(TAG, \"Getting raw value for sensor %d\", sensorIdx);\n\n    if (!this->isValidSensorIdx(sensorIdx)) {\n        ESP_LOGW(TAG, \"Requested non-existing sensor\");\n        return std::nullopt;\n    }\n\n    TargetChannel target = sensorToChannel.at(sensorIdx);\n    switch (target.unit) {\n        case ADC_UNIT_1:\n            return std::make_optional(adc1_get_raw(target.channel.adc1_id));\n        case ADC_UNIT_2:\n            int value;\n            adc2_get_raw(target.channel.adc2_id, ADC_WIDTH_BIT_12, &value);\n            return std::make_optional(value);\n        default:\n            ESP_LOGW(TAG, \"Invalid ADC unit requested\");\n            return std::nullopt;\n    }\n}\n
\n\n

And this works perfectly fine for some pins but not for others.

\n\n

I also tried a few things I found in the docs to set the pin explicitly to INPUT and low, but it didn't change anything.

\n\n
    for (auto const& [ sensorIdx, pin ] : sensorPins) {\n        gpio_pad_select_gpio(pin);\n        gpio_set_direction(pin, GPIO_MODE_INPUT);\n        gpio_set_level(pin, 0);\n    }\n
\n\n

I am powering and connecting to the board via Ethernet/PoE. I am not (knowingly) activating Wi-Fi, the RTC or the hall sensor anywhere in the code, and I am not using the SD card or the flash memory for data storage. The values will only be polled via HTTP/Ethernet.

\n\n

So, my actual question is: why do some pins (e.g. GPI36, explicitly documented as an input-only pin) carry a non-zero voltage while others don't? What am I missing?

\n", "Title": "ESP32 always high pins screwing analog measure", "Tags": "|sensors|hardware|esp32|microcontrollers|", "Answer": "

Please note that the SMT50 voltage outputs have an output resistance of 10 kOhm (see the SMT50 datasheet).\nSo if there is a pull-up resistor to 3.3 V, you will always get voltage levels which are too high. Good to know about the pull-ups, since I plan to start an ESP32 irrigation control project with the SMT50 myself; I will choose the pins without pull-ups.
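The effect can be estimated as a resistor divider formed by the sensor's 10 kOhm output resistance and an internal pull-up to 3.3 V. Here is a sketch of the arithmetic, assuming a ~45 kOhm pull-up (a typical ESP32 value, used here purely as an assumption; strapping pins may pull harder):

```python
def pin_voltage(v_sensor, r_source=10e3, r_pullup=45e3, v_pullup=3.3):
    """Node voltage where a sensor (via r_source) and a pull-up (r_pullup) meet."""
    g_source, g_pullup = 1 / r_source, 1 / r_pullup
    return (v_sensor * g_source + v_pullup * g_pullup) / (g_source + g_pullup)

# A genuine 1.0 V sensor signal reads ~1.42 V with the pull-up enabled:
print(round(pin_voltage(1.0), 2))
# With no pull-up (r_pullup made effectively infinite), the reading is correct:
print(round(pin_voltage(1.0, r_pullup=1e12), 2))
```

This illustrates why the affected pins read consistently high even though the sensor itself is fine.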

\n" }, { "Id": "5105", "CreationDate": "2020-06-13T17:30:53.800", "Body": "

This thing is driving me nuts.

\n\n

I tried to debug my secure MQTT connection on a NodeMCU. Following some post from 2015, I switched on debugging output with Serial.setDebugOutput(true). Now I have tried to switch it off with setDebugOutput(false), as well as choosing \"Disabled\" and selective output, you name it, from the menus:

\n\n

\"arduino

\n\n

I tried to do

\n\n
void setup(void) {\n  Serial.begin(115200);\n  Serial.setDebugOutput(true);\n  Serial.setDebugOutput(false); \n
\n\n

I also tried

\n\n
void setup(void) {\n  Serial.begin(115200);\n
\n\n

It just keeps printing:

\n\n

\"debugging

\n\n

Posts from 2015 show that this issue was addressed in some version of the Arduino IDE. Where should I look for the versions?

\n", "Title": "switching setDebugOutput off on esp8266", "Tags": "|esp8266|arduino|", "Answer": "

If you cannot disable the generation of the debug messages, then you could try to redirect the debug messages so that they are not visible on Serial.

\n

As a workaround, send the messages to Serial1 and keep the console attached to Serial.

\n

This may not be an optimum solution, because the messages are still being sent, which takes up some of the processing power and uses up a serial port.

\n

If that does not solve your logging problem, make sure that you also set Serial1 as your logging Serial from the menu:

\n

\"set

\n" }, { "Id": "5109", "CreationDate": "2020-06-18T13:42:53.487", "Body": "

I am looking for a way to connect to my ESP8266's web server without being connected to the same network as it, and without using the port-forward or VPN-tunnel option on my router.\nIf there are other options for sending commands through a user interface other than the ESP8266's web server, I'm open to tips on that too.

\n

I know this can be done with certain devices (although I'm not sure about the ESP8266), but I have no idea how it's done. For example, to connect to a smart thermostat like Nest, there is no need to port forward or use a VPN to be able to send commands to it.

\n

Any help would be greatly appreciated.

\n", "Title": "Connect to ESP8266 outside network, not using portforward or VPN", "Tags": "|esp8266|", "Answer": "

Assuming all this is on a "normal" home broadband network with a dynamic IPv4 address and a router operating as a NAT gateway.

\n

The devices you are talking about do not operate as HTTP servers. They use protocols where the device connects out to a known public endpoint, e.g. an MQTT broker hosted in the cloud.

\n

Messages for the device are sent to the cloud broker and then forwarded to the right device.

\n

Because the devices connect out and keep the connection alive there is no need to do any port forwarding or use a VPN.
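A minimal sketch of that connect-out pattern, written in Python with the paho-mqtt package for readability (the broker hostname `broker.example.com` and the topic `devices/esp-01/cmd` are made-up placeholders, not a real service):

```python
def parse_command(payload: bytes) -> str:
    """Normalise an incoming command payload, e.g. b' LED ON ' -> 'led on'."""
    return payload.decode("utf-8", errors="replace").strip().lower()

def run_device(broker_host="broker.example.com", device_id="esp-01"):
    """Connect OUT to a cloud broker and wait for commands pushed to us."""
    import paho.mqtt.client as mqtt  # requires the paho-mqtt package

    def on_connect(client, userdata, flags, rc):
        # Subscribe on every (re)connect; commands reach us over this
        # outbound connection, so no router port needs to be forwarded.
        client.subscribe(f"devices/{device_id}/cmd")

    def on_message(client, userdata, msg):
        print("command received:", parse_command(msg.payload))

    client = mqtt.Client(client_id=device_id)
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(broker_host, 1883, keepalive=60)
    client.loop_forever()  # keeps the connection (and keepalive pings) going

# run_device() would be called on the actual device; it is not invoked here.
```

Anyone who should be able to control the device then publishes to its command topic on the broker, from anywhere on the internet.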

\n" }, { "Id": "5111", "CreationDate": "2020-06-19T10:56:39.773", "Body": "

My SmartMi Fan 2S arrived today, and I've finished the initial device setup in the Mi Home app. The pairing process was slow, but in the end it finished successfully.

\n

And now, 95% of the time, I see the fan as \"Device Offline\" in the Mi Home dashboard. Sporadically it becomes available for a few seconds and I can even start a firmware update, but the update process always fails; it never finishes successfully (now stuck at 15%).

\n

It's the Chinese version of the device. I use the latest Mi Home Android app available in Google Play, and I chose the Chinese Mainland region.

\n

I tried to connect the Fan to openHAB. In Paper UI, I can watch the device status change from OFFLINE to OFFLINE - COMMUNICATION ERROR to ONLINE and then to OFFLINE - CONFIGURATION_ERROR. If I'm lucky, while it's ONLINE I can toggle the ON/OFF switch and it works (with a minute-long delay).

\n

I'm unhappy with the purchase. I can't use smart features. I can't update the firmware.

\n

How to fix the connection issue?

\n

debug details:

\n
hardwareVersion        esp32\nmcuFirmware            0008\nmodelId                zhimi.fan.za4\nvendor                 Xiaomi\nwifiFirmware           v3.1.3-8-gce4d3fe10\nCurrent firmware:      2.0.3\n
\n
Phone: Google Pixel 2\nAndroid 10\nMi Home app version 5.6.99 (the latest)\nRouter Huawei HG8245\n
\n

What have I already tried?

\n\n

Screenshots:\n\"enter

\n

\"enter

\n

\"enter

\n", "Title": "Xiaomi SmartMi Fan 2S problems - Wi-Fi MiHome app not connecting", "Tags": "|openhab|xiaomi-mi|", "Answer": "

The problem is somehow related to my Wi-Fi hotspot (or Huawei router).

\n

Working solution:

\n\n

Enjoy!

\n" }, { "Id": "5157", "CreationDate": "2020-07-15T12:36:33.687", "Body": "

I am using eclipse-mosquitto:1.6.9 and would like to remap messages arriving on topic /registration_app to /$share/registration_app.

\n

In this case, IoT devices will be publishing registration messages to /registration_app and a backend app will be listening and processing each request. To scale the backend process horizontally, I want to switch the backend to /$share/registration_app (MQTT v5.0), but I don't want to change the original incoming message topic /registration_app.

\n

So far I can see that remapping is available in the case of bridging, so I would like to know if I can do remapping without a bridge.

\n", "Title": "mosquitto remap local topic (no bridge)", "Tags": "|mqtt|mosquitto|", "Answer": "

First, it is really bad practice to start topics with a leading /. It adds a null entry at the start of the topic tree and causes problems with things like shared subscriptions (we will get to that next).

\n
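To see that null entry, split the topic the way a broker does:

```python
# A leading "/" produces an empty first segment in the topic tree.
print("/registration_app".split("/"))   # ['', 'registration_app']
print("registration_app".split("/"))    # ['registration_app']
```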

Second, I think you have misunderstood how shared subscriptions work. Topics that start with $share/ (note the lack of a leading /) are used to set up shared subscription groups, so that groups of clients can load-balance the consumption of messages published to a given topic pattern. You don't do any topic remapping yourself.

\n

To set up a shared subscription group you subscribe to a topic as follows:

\n
$share/<groupname>/<topic>\n
\n

So in your case, assuming a group called backend and messages published to registration_app (again, note the lack of a leading /), it would be

\n
$share/backend/registration_app\n
\n
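As a sketch, here is a plain-Python helper that builds the shared-subscription topic for the backend workers; each worker would pass the result to its (MQTT v5 capable) client's subscribe call. The function name is illustrative, not part of any library.

```python
# Build a shared-subscription topic for a worker group. The plain topic
# must not start with "/", for the reasons described above.

def shared_topic(group, topic):
    if topic.startswith("/"):
        raise ValueError("don't use a leading / on MQTT topics")
    return "$share/%s/%s" % (group, topic)

# Each backend worker subscribes with the same group name; the broker then
# delivers each published message to only one member of the group, e.g.
#   client.subscribe(shared_topic("backend", "registration_app"))
```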

If you use a leading / on the original topic, you have to insert a double / in the shared subscription topic to preserve the null entry, so just don't do it:

\n
$share/backend//registration_app\n
\n" }, { "Id": "5177", "CreationDate": "2020-07-24T14:42:34.303", "Body": "

I am trying to send messages from my Arduino to my Raspberry Pi, but I don't understand why the Pi isn't receiving them.

\n

On the Arduino side I'm using an Arduino UNO with a Dragino Lora/GPS shield, using the Radiohead library.

\n

On the Pi side, a Pi Zero with the Upotronics LoRa HAT, which is based on an RFM95 chip. On the software side I'm using the raspi_lora library.

\n

I wrote a couple of simple programs just to test the connection, and I can send messages from the raspberry to the Arduino, but not the other way around.

\n

Am I making a mistake in the code on either sending the data from the Arduino or receiving it on the raspberry that prevents this?

\n

Code for the Arduino: this one sends data it receives from the serial port, and listens for radio input when nothing is available.

\n
#include <SPI.h>\n#include <RH_RF95.h>\n\nRH_RF95 rf95;\n\nvoid setup(){\n  Serial.begin(9600);\n  if (rf95.init()){\n    Serial.println("Init Success");\n  } else {\n    Serial.println("Init Failed");\n  }\n  if (rf95.setFrequency(868)) Serial.println("Freq set for 868Mhz");\n  if (!rf95.setModemConfig(RH_RF95::Bw125Cr45Sf128)) Serial.println("Invalid modem");\n\n  Serial.println("Send serial data to echo it through radio");\n}\n\nvoid loop(){\n  uint8_t data[100];\n  uint8_t len;\n  if (Serial.available()){\n    delay(20);\n    int i = 0;\n    while (Serial.available() && i < sizeof(data)-1){\n      data[i] = Serial.read();\n      i++;\n      data[i] = 0;\n    }\n    rf95.setModeTx();\n    if(rf95.send(data,i))\n    {\n      Serial.print("Message sent: ");\n      Serial.println((char *) data);\n      Serial.println(i);\n    } else {\n      Serial.println("Failure to send");\n    }\n  }\n\n  if (rf95.available())\n  {\n    if (rf95.recv((uint8_t *)data,&len)){\n      Serial.println("Got it");\n      Serial.println((char *)data);\n      Serial.println(len);\n    }\n  }\n}\n
\n

Raspberry Pi code: this one just initializes the LoRa instance. I run it in the interactive console and in theory it should print data, but however much I tried, I haven't managed to catch an IRQ on any of the pins when I send a message.

\n
from raspi_lora import LoRa, ModemConfig\n\n# This is our callback function that runs when a message is received\ndef on_recv(payload):\n    print("From:", payload.header_from)\n    print("Received:", payload.message)\n    print("RSSI: {}; SNR: {}".format(payload.rssi, payload.snr))\n\n\nlora = LoRa(0, 25, 2,freq=868, receive_all=True, modem_config=ModemConfig.Bw125Cr45Sf128)\nlora.on_recv = on_recv\n\nlora.set_mode_rx()\n
\n", "Title": "Cannnot send from Arduino to Raspberry pi via LoRa", "Tags": "|raspberry-pi|arduino|lora|", "Answer": "

I figured it out.\nIt turns out there was a bug in the raspi_lora library I used for my Python code:\neven with receive_all=True, messages that were not sent specifically to the device's address were dropped.

\n

If you plan to use the raspi_lora library, you should replace line 268 of lora.py with:

\n
if (self._this_address != header_to) and ((header_to != BROADCAST_ADDRESS) or (self._receive_all is False)):\n
\n" }, { "Id": "5180", "CreationDate": "2020-07-25T18:56:18.687", "Body": "

I have been reading about the UUID on the HM-10 and multiple sources say that "The main part of the user UUID service (FFE0) and the main part of the custom characteristic can be changed using the AT commands. You can also add another characteristic." but I can't find any information or AT commands for this.

\n", "Title": "How to change/add characteristics on HM-10?", "Tags": "|bluetooth|", "Answer": "

In the AT command list for the HM-10 you\u2019ll find AT+UUID, which allows changing the service UUID, and AT+FFE2, which enables control of a second characteristic.

\n

Note that features may depend on the firmware version on your module, and that it\u2019s definitely not very flexible.

\n" }, { "Id": "5194", "CreationDate": "2020-08-07T23:06:48.423", "Body": "

Found at the bottom of my gadget box; I am trying to use this board for a hobby project, but I cannot even remember what model it is.

\n

Could anyone help me identify the model of this board?\nAlso, where can I download its driver? I assume I will need one to use this board.\n\"board\"

\n

So far I know it's from GHI Electronics and the brand is Canxtra; I searched for "Canxtra Rev 2.1A" but nothing came up.

\n", "Title": "Identify the ancient IOT hardware model and find out its driver", "Tags": "|arduino|hardware|", "Answer": "

It is a programmable OBD-II tool.

\n

https://docs.ghielectronics.com/hardware/automotive.html

\n

\"enter

\n" }, { "Id": "5198", "CreationDate": "2020-08-08T15:36:14.080", "Body": "

I am creating a "digital dashboard" consisting of a TV and a Raspberry Pi 3B+. The TV just shows a calendar, the time, the weather, etc.

\n

In the tutorial I am following, the creator uses a cron job to turn the TV on and off via CEC, but only at fixed times.

\n

What I would like to achieve is this: whenever at least one of two phones (my girlfriend's and mine) is logged into the router, the TV is on; otherwise, the TV is off. In short, whenever nobody is at home, the TV should be off. Also, at night (say 11 PM to 6 AM) the TV shouldn't be on either; it should be turned off automatically at 11 PM.

\n

My router is a Fritz!Box 7520, just to mention that as well.

\n

What do you think? Is this even doable? Or is this going to mean huge effort and high costs?

\n

Thanks in advance!

\n", "Title": "Control Raspberry Pi depending on Wifi-Users", "Tags": "|smart-home|raspberry-pi|wifi|hardware|software|", "Answer": "

I got it working!

\n

It was pretty easy once I had the right idea.

\n

I use the IP addresses of the phones, which I configured to be static in my network.

\n

Then I wrote a little Python script that pings both IP addresses. If at least one of the two is online, the TV is turned on; if both are offline, the TV is turned off.

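The script itself is not included in the answer; a minimal sketch of the idea looks like this. The IP addresses, the 11 PM to 6 AM night window (taken from the question), and the cec-client on/standby commands are assumptions for the example.

```python
# Presence-based TV control: ping two static phone IPs and decide whether
# the TV should be on. Run from cron or in a loop on the Pi.
import datetime
import subprocess

PHONE_IPS = ["192.168.178.20", "192.168.178.21"]  # assumed static IPs


def is_up(ip):
    """One ping with a short timeout; True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0


def tv_should_be_on(anyone_home, hour):
    """On only if someone is home and it is not the 23:00-06:00 window."""
    return anyone_home and 6 <= hour < 23


def set_tv(on):
    # cec-client single-command mode: "on 0" wakes the TV, "standby 0"
    # turns it off, as used in the tutorial the question mentions.
    cmd = "on 0" if on else "standby 0"
    subprocess.run(["cec-client", "-s", "-d", "1"], input=cmd.encode())


def main():
    anyone_home = any(is_up(ip) for ip in PHONE_IPS)
    hour = datetime.datetime.now().hour
    set_tv(tv_should_be_on(anyone_home, hour))
```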
\n" }, { "Id": "5210", "CreationDate": "2020-08-17T11:30:48.967", "Body": "

Are there any KNX gateways that work with Apple HomeKit? I'm currently using the BAB Technologie EIBPORT V3 and have to speak to it via CUBEVISION 2 which works very poorly. I would love to find a way to simply control my KNX devices (primarily lighting) using HomeKit.

\n", "Title": "Controlling a KNX gateway with Apple HomeKit", "Tags": "|apple-homekit|knx|", "Answer": "

Although I don't have any personal experience with it, I found a YouTube video from ThinKNX which suggests that it would allow you to control KNX lights from Apple HomeKit.

\n

They outline here how to integrate ThinKNX with Apple HomeKit.

\n

Again, I'm afraid I'm not an expert, but I hope this could point you in the right direction!

\n" }, { "Id": "5231", "CreationDate": "2020-08-30T13:52:58.967", "Body": "

I am searching for a way to make my home "smart". I have a Raspberry Pi 4 and a Pixel 2 phone, and I thought it would be cool to make my dumb devices "smart" and control them via Google Assistant from the Pixel.

\n

I stumbled upon this picture (source); it describes the request lifecycle, and I am wondering if it is possible to use my Raspberry Pi 4 instead of the Google Nest/smart speaker (the one located near the "JS" box).

\n

\"\"
\n(source: google.com)

\n

I am a software developer and I can write any server code that is needed for this, but I can't find any docs yet.

\n", "Title": "Can Raspberry Pi be used instead of Google Nest as a hub to control my smart home via Google Assistant?", "Tags": "|smart-home|raspberry-pi|google-assistant|", "Answer": "

That diagram is describing Google Assistant's Local Control SDK.

\n

When using the Local Control SDK you write some JavaScript that is executed on a Google Home/Home Mini/Nest Hub to send control messages over the local network to the device you want to control. (You still need a full cloud setup for local control to work as well; docs for how to write a full Smart Home Action are here.)

\n

You can build your own (for non-commercial use) smart speaker using the Google Assistant Service library, which will run on a Raspberry Pi. I do not think this supports Local Control.

\n

Now, if you want to use your phone to issue commands to Google Assistant and then just use the Pi to control the devices, there are a few options.

\n

One of them is to install Node-RED on the Pi and use a service like the Node-RED Google Assistant Bridge (full disclosure: I run this service). This lets you define virtual devices that will be added to your Google Assistant, and you can then connect those to whatever devices you want.

\n" }, { "Id": "5238", "CreationDate": "2020-09-03T10:20:37.327", "Body": "

Summary

\n

I would like to build (ideally based on ESPHome) a device with a buzzer I can trigger remotely via the network.

\n

Context

\n

I have several systems built around my home automation system:

\n\n

I am listing all this to show that I have, so far, two major kind of operations:

\n\n

What I am missing

\n

I would now like to build an IoT device that accepts data from my Wi-Fi network and triggers a module attached to it. You can see this as some kind of poor man's alarm clock, where all the logic of the alarm is offloaded to a service and the device just receives the order to buzz.

\n

What is the right approach to build such an IoT?

\n

I have NodeMCU modules, or Wemos D1s. I could flash them with ESPHome, bringing in Wi-Fi communication and the ability to connect to GPIOs.

\n

What I do not understand is how, exactly, the Wi-Fi stack interacts with the GPIOs. Do I need to write a specific module to be added during compilation? (It's been 20 years since I last coded in C, back during my PhD, but this is something I could get into.) Or is there a module that provides the bridge already?

\n

Generally speaking, what is the approach when I want to send a message to an ESPHome, Wi-Fi-enabled device in order to access its GPIOs?

\n

Please note that I know how to do it the other way round: I have added existing ESPHome modules to a Wemos D1 and they are correctly exposed in Home Assistant or the built-in web server. But that is a case where such modules already exist (for specific hardware) and just send data out.

\n", "Title": "How to control an ESP8266 via the Wi-Fi?", "Tags": "|esp8266|wifi|hardware|esp32|", "Answer": "

If you are looking for something custom on the ESP8266 end, you are likely going to want to write a program to do what you want. If you are already using MQTT, then you might consider MicroPython on the NodeMCU: the toolchain is a bit simpler, and development iterations are a lot quicker than flashing a C program.

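For example, a MicroPython version of the "receive a message, drive a GPIO" bridge could look roughly like this (the broker address, topic, and buzzer pin are assumptions; umqtt.simple ships with common MicroPython builds). The payload parser is plain Python so the logic can be checked off-device.

```python
# MicroPython sketch: subscribe to a topic and buzz a GPIO when a
# message arrives. run() is meant to be called on the NodeMCU itself.

def parse_duration_ms(payload, default_ms=500):
    """Payload is an optional buzz duration in ms, e.g. b"1000"."""
    try:
        return max(0, int(payload))
    except (TypeError, ValueError):
        return default_ms


def run(broker="192.168.1.10", topic=b"alarm/buzz", pin_no=5):
    # These imports only exist on the MicroPython device.
    import time
    from machine import Pin
    from umqtt.simple import MQTTClient

    buzzer = Pin(pin_no, Pin.OUT)

    def on_msg(_topic, msg):
        buzzer.value(1)
        time.sleep(parse_duration_ms(msg) / 1000)
        buzzer.value(0)

    client = MQTTClient("buzzer-node", broker)
    client.set_callback(on_msg)
    client.connect()
    client.subscribe(topic)
    while True:
        client.wait_msg()   # blocks until the next message
```

Publishing any payload to the topic (e.g. from Home Assistant or mosquitto_pub) then triggers the buzzer, which matches the "offload the alarm logic to a service" design in the question.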
\n" }, { "Id": "5266", "CreationDate": "2020-09-26T13:21:37.740", "Body": "

I'm new to AWS services and still studying the docs. I received some fairly long (working) Python code that exchanges data with the cloud.

\n

Now I want to pub/sub messages with mosquitto.\nBasically I'm trying the following:

\n
mosquitto_sub -h <id>.iot.us-east-2.amazonaws.com -p 443 -t "$aws/things/<sn>/shadow/update/delta" --cafile ./root-CA.crt --cert ./<sn>.cert.pem --key ./<sn>.private.key -d -i <sn> \n
\n

Where:

\n\n

That command leads to:

\n
\n

Client sending CONNECT

\n
\n

and nothing else.\nI found a policy document inside a Python script (to be used when creating the device):

\n
{\n  "Version": "2012-10-17",\n  "Statement": [\n    {\n      "Effect": "Allow",\n      "Action": [\n        "iot:Connect"\n      ],\n      "Resource": [\n        "arn:aws:iot:us-east-2:<id>:client/<sn>"\n      ]\n    },\n    {\n      "Effect": "Allow",\n      "Action": [\n        "iot:Publish"\n      ],\n      "Resource": [\n        "arn:aws:iot:us-east-2:<id>:topic/$aws/things/<sn>/shadow/update",\n        "arn:aws:iot:us-east-2:<id>:topic/IoTData"\n      ]\n    },\n    {\n      "Effect": "Allow",\n      "Action": [\n        "iot:Subscribe",\n        "iot:Receive"\n      ],\n      "Resource": [\n        "arn:aws:iot:us-east-2:<id>:topicfilter/$aws/things/<sn>/shadow/update/accepted",\n        "arn:aws:iot:us-east-2:<id>:topicfilter/$aws/things/<sn>/shadow/update/rejected",\n        "arn:aws:iot:us-east-2:<id>:topicfilter/$aws/things/<sn>/shadow/update/delta"\n      ]\n    }\n  ]\n}\n
\n

But I'm not sure if this policy is "attached" to the certificates, and even after reading the docs I'm not sure if the CLI command refers to my target's console (an RPi).

\n

UPDATE

\n

From the AWS console I created a new certificate and downloaded the three files to the target: <xxx>-certificate.pem.crt, <xxx>-private.pem.key, <xxx>-public.pem.key. Then I attached the policy to this certificate (from the AWS console itself).

\n

Still, the connection does not complete and no answer is received.

\n", "Title": "Mosquitto does not connect to AWS", "Tags": "|mqtt|mosquitto|aws|", "Answer": "

I had to attach the policy to the certificate and set the port to 8883, because the connection uses plain MQTT (over TLS) rather than MQTT over WebSockets.

\n" }, { "Id": "5267", "CreationDate": "2020-09-27T16:11:03.643", "Body": "

I have a Wi-Fi bulb from Xiaomi/Philips. It is controlled using the Mi Home app. I can control it (turn it on/off) both when I am on the same network as the bulb and when I am in a completely different place, on a different network.

\n

How is it possible?

\n

I understand that while on the same Wi-Fi network, my phone can talk directly to the bulb (although I do not know if that is what happens in reality). But when I'm on a different network, how does it work?

\n

I assume that Mi Home does not actually talk with the bulb directly. I believe it communicates with some cloud server that actually communicates with the bulb. However, how does such server (in the cloud) communicate with my bulb in my local (NATted) network? I do not have any port forwarding set up on my router for my bulb.

\n

The only way I can see this being possible is if the bulb itself checks the cloud for pending commands by invoking some API on a schedule (every few seconds?), i.e. some form of HTTP polling. I don't like this idea, because it would mean my network gets very "crowded" if I had a few of these bulbs.

\n

So, how am I able to control my bulb from another network?

\n", "Title": "How IoT device in my local network is controlled from other networks?", "Tags": "|networking|routers|xiaomi-mi|", "Answer": "

All these devices open persistent connections out to the cloud. They use a messaging-based protocol (e.g. MQTT) rather than a request/response protocol like HTTP.

\n

Messages flow both ways over these types of protocols, with the bulbs updating the cloud with their current state and the cloud sending commands to change state.

\n" }, { "Id": "5270", "CreationDate": "2020-09-28T04:54:25.613", "Body": "

How can I override the wlan driver's region limitations to permit the (legal) 900MHz unlicensed operation of my 802.11ah evaluation kit?

\n

The [Silex evaluation kit for 802.11ah][1] doesn't seem to work at sub-GHz: a spectrum analyzer shows no power in the sub-GHz band, and it appears to be operating at 5.825GHz instead. Their support email has failed to respond after 11 days, and this appears to be a problem with the SSD that comes with the kit, as evidenced by the following output.

\n
pi@raspberrypi:~ $ iwconfig wlan0\nwlan0     IEEE 802.11  ESSID:"Wi-Fi"  \n          Mode:Managed  Frequency:5.825 GHz  Access Point: 84:25:3F:87:FA:E8  \n          Bit Rate=6 Mb/s   Tx-Power=30 dBm  \n          Retry short limit:7   RTS thr:off   Fragment thr:off\n          Power Management:off\n          Link Quality=70/70  Signal level=-32 dBm  \n          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0\n          Tx excessive retries:0  Invalid misc:100   Missed beacon:0\n\npi@raspberrypi:~ $ cat ~/sx-newah/conf/US/sta_halow_sae.conf\nctrl_interface=/var/run/wpa_supplicant\n\ncountry=US\nnetwork={\n    ssid="Wi-Fi"\n    proto=RSN\n    key_mgmt=SAE\n    ieee80211w=2\n    pairwise=CCMP\n    group=CCMP\n    sae_password="12345678"\n    freq_list=2422 2432 2442 2452 2462 5180 5185 5190 5195 5200 5205 5210 5215 5220 5225 5230 5235 5240 5745 5750 5755 5760 5500 5520 2437 2457 5765 5770 5775 5780 5785 5790 5795 5800 5805 5810 5815 5820 5825\n    scan_freq=2422 2432 2442 2452 2462 5180 5185 5190 5195 5200 5205 5210 5215 5220 5225 5230 5235 5240 5745 5750 5755 5760 5500 5520 2437 2457 5765 5770 5775 5780 5785 5790 5795 5800 5805 5810 5815 5820 5825\n}\np2p_disabled=1\nignore_old_scan_res=1\n\npi@raspberrypi:~ $ iw reg get\nglobal\ncountry US: DFS-FCC\n(2402 - 2472 @ 40), (N/A, 30), (N/A)\n(5170 - 5250 @ 80), (N/A, 23), (N/A), AUTO-BW\n(5250 - 5330 @ 80), (N/A, 23), (0 ms), DFS, AUTO-BW\n(5490 - 5730 @ 160), (N/A, 23), (0 ms), DFS\n(5735 - 5835 @ 80), (N/A, 30), (N/A)\n(57240 - 63720 @ 2160), (N/A, 40), (N/A)\n
\n

PS: I've tried modifying the conf files.
\n[1]: https://www.silextechnology.com/connectivity-solutions/embedded-wireless/sx-newah-evaluation

\n", "Title": "802.11ah Eval Kit Not Operating In 900MHz Spectrum?", "Tags": "|wifi-halow|drivers|", "Answer": "

Partially my mistake. It is, indeed, operating in the 900MHz spectrum.

\n

What threw me off was that all of the command outputs provided in the question agreed that there was no support for 900MHz, in combination with a mistake of my own:

\n

I modified the "freq_list" (and scan_freq) to include the 900MHz frequencies. This caused the driver to fail to broadcast at all. I didn't realize this, so upon taking delivery of the spectrum analyzer, nothing showed up at 900MHz, thereby "confirming" my belief that the unit was operating at 5GHz (the spectrum analyzer I bought only goes up to the 2.4GHz range). Only after realizing that the radio might not be broadcasting at all did I revert the configuration to the original. At that point the spectrum analyzer showed power at 928MHz.

\n" }, { "Id": "5273", "CreationDate": "2020-09-28T14:52:32.550", "Body": "

I'm trying to create a garage door sensor / motor controller and connect it to my instance of Home Assistant. My broker is the local Mosquitto on the Raspberry Pi.

\n

The device is a Wemos D1 mini with a relay connected to the D1 pin and an analog input to determine the current state of the door.

\n

The problem is that the device shows up as unavailable as soon as the MQTT connection to the broker is made. I can't see anything in the logs to suggest a problem, and on top of that I can still successfully publish to and from the device while it is reported offline.

\n

This is the specific code I'm using for the LWT / availability topic.

\n\n
String clientId = "GarageDoorSensor-" + String(random(0xffff), HEX);\nif (client.connect(clientId.c_str(), mqtt_user, mqtt_password, availabilityTopic, 0, true, payloadNotAvailable)) {\n  Serial.println("connected");\n  client.publish(availabilityTopic, payloadAvailable, true);  \n          \n  client.subscribe(commandTopic);\n          \n  Serial.println("Subscribed to: ");\n  Serial.println(commandTopic);\n  Serial.println(availabilityTopic);\n   \n  getAndSendDoorStatus();\n}\n
\n

Here is the full code:

\n
#include <ESP8266WiFi.h>\n#include <PubSubClient.h>\n#include <ArduinoJson.h>\n#include "My_Helper.h"\n\nWiFiClient espClient;\nPubSubClient client(espClient);\n\nvoid setup_wifi() {\n\n  delay(10);\n  // We start by connecting to a WiFi network\n  Serial.println();\n  Serial.print("Connecting to ");\n  Serial.println(ssid);\n\n  //Set WiFi mode so we don't create an access point.\n  WiFi.mode(WIFI_STA);\n  WiFi.begin(ssid, password);\n  \n  Serial.println(WiFi.mode(WIFI_STA));\n  \n  while (WiFi.status() != WL_CONNECTED) {\n    delay(500);\n    Serial.print(".");\n  }\n  \n  randomSeed(micros());\n  \n  Serial.println("");\n  Serial.println("WiFi connected");\n  Serial.println("IP address: ");\n  Serial.println(WiFi.localIP());\n}\n\nvoid reconnect() {\n  // Loop until we're reconnected\n  while (!client.connected()) {\n    Serial.print("Attempting MQTT connection...");\n    // Attempt to connect\n    String clientId = "GarageDoorSensor-" + String(random(0xffff), HEX);\n    if (client.connect(clientId.c_str(), mqtt_user, mqtt_password, availabilityTopic, 0, true, payloadNotAvailable)) {\n      Serial.println("connected");\n      client.publish(availabilityTopic, payloadAvailable, true);  \n      \n      client.subscribe(commandTopic);\n      \n      Serial.println("Subscribed to: ");\n      Serial.println(commandTopic);\n      Serial.println(availabilityTopic);\n\n      getAndSendDoorStatus();\n    } else {\n      Serial.print("failed, rc=");\n      Serial.print(client.state());\n      Serial.println(" try again in 5 seconds");\n      // Wait 5 seconds before retrying\n      delay(5000);\n    }\n  }\n}\n\n\nvoid setup() {\n\n  pinMode(relaySwitch, OUTPUT);\n\n  pinMode(doorInput, INPUT);\n  \n  Serial.begin(115200);\n  // put your setup code here, to run once:\n  setup_wifi();\n  client.setServer(mqtt_server, 1883);\n  client.setCallback(callback);\n  while (!client.connected()) {\n    reconnect();\n  }\n\n  StaticJsonDocument<1024> mqttConfig;\n  
mqttConfig["name"] = mqttDeviceName;\n  mqttConfig["device_class"] = mqttDeviceClass; \n  mqttConfig["state_topic"] = stateTopic;\n  mqttConfig["command_topic"] = commandTopic; \n  mqttConfig["state_open"] = opened;\n  mqttConfig["state_closed"] = closed;\n  mqttConfig["state_closing"] = closing;\n  mqttConfig["state_opening"] = opening;\n  mqttConfig["payload_open"] = payloadOpen;\n  mqttConfig["payload_close"] = payloadClose;\n  mqttConfig["payload_stop"] = payloadStop;\n  mqttConfig["optimistic"] = false;\n  mqttConfig["retain"] = true;\n  mqttConfig["availability_topic"] = availabilityTopic;\n  mqttConfig["payload_available"] = payloadAvailable;\n  mqttConfig["payload_not_available"] = payloadNotAvailable;\n  char json[1024];\n  serializeJsonPretty(mqttConfig, json);\n  client.publish(configTopic, json, true); \n}\n\nvoid loop() {\n  if (!client.connected()) {\n    reconnect();\n  }\n  client.loop();\n  \n  delay(1000);\n  getAndSendDoorStatus();\n}\n\nvoid callback(char* topic, byte* message, unsigned int length) {\n\n  String messageStr;\n  \n  for (int i = 0; i < length; i++) {\n    messageStr += (char)message[i];\n  }\n  \n  if (String(topic) == commandTopic) {\n     Serial.print("Home Assistant Command: ");\n     Serial.println(messageStr);\n\n     if((messageStr == payloadOpen && doorStatus != opened) || messageStr == "forceOpen"){\n        //open door is not already open or we are running a test.\n        openTheDoor();\n     }else if((messageStr == payloadClose && doorStatus != closed) || messageStr == "forceClose"){\n        //close door is not already closed or we are running a test\n        closeTheDoor(); \n     }else if(messageStr == payloadStop){\n\n        //make sure we undo the relevant switches to stop the motion based on the previous status\n        if(doorStatus == opened){\n          closeTheDoor();\n        }else if(doorStatus == closed){\n          openTheDoor();\n        }\n    }\n\n    prevDoorStatus = doorStatus;\n\n    if(messageStr 
== "incrementOpenThreshold"){\n      openThreshold = openThreshold + 1;\n      String msg = "Set open threshold to: " + openThreshold;\n      client.publish(stateTopic, msg.c_str(), true); \n    }\n\n    if(messageStr == "decrementOpenThreshold"){\n      openThreshold = openThreshold - 1;\n      String msg = "Set open threshold to: " + openThreshold;\n      client.publish(stateTopic, msg.c_str(), true); \n    }\n\n  }\n}\n\nvoid getAndSendDoorStatus(){\n  int doorState = analogRead(doorInput);\n\n  //Door fully open?\n  if(doorState < openThreshold){\n    statusToOpen();\n  } else {\n    statusToClosed();\n  }\n \n  if(prevDoorStatus != doorStatus){\n    Serial.print("Door ");\n    Serial.println(doorStatus);\n    client.publish(stateTopic, doorStatus, true); \n\n    delay(500);\n    if(doorStatus == opened){\n       openTheDoor(); \n    } else {\n       closeTheDoor();\n    }\n    \n    prevDoorStatus = doorStatus;\n  } \n  \n}\n\nvoid closeTheDoor(){\n  digitalWrite(relaySwitch, closeDoor);\n}\n\nvoid statusToClosed(){\n  doorStatus = closed;\n}\n\nvoid openTheDoor(){\n  digitalWrite(relaySwitch, openDoor);\n}\n\nvoid statusToOpen(){\n  doorStatus = opened;  \n}\n
\n

MY_HELPER.H

\n
#ifndef MY_HELPER_H\n#define MY_HELPER_H\n\nconst char* ssid = "YOUR_WIFI_SSID";\nconst char* password = "YOUR_WIFI_PASSWORD";\nconst char* mqtt_user = "YOUR_MQTT_USER";\nconst char* mqtt_password = "YOUR_MQTT_PASSWORD";\nconst char* mqtt_server = "YOUR_MQTT_SERVER";\nconst char* stateTopic = "homeassistant/cover/garage/door/state";\nconst char* commandTopic = "homeassistant/cover/garage/door/set";\nconst char* configTopic = "homeassistant/cover/garage/door/config";\nconst char* availabilityTopic = "homeassistant/cover/garage/door/availability";\nconst char* doorStatus = "";\nconst char* prevDoorStatus = "";\n\nconst char* mqttDeviceName = "Garage Door";\nconst char* mqttDeviceClass = "garage";\n\nint prevDoorState;\nint openThreshold = 15;\n\nconst int relaySwitch = D1;\n\nconst int doorInput = A0;\n\nconst int conexT1 = LOW;\nconst int conexT2 = HIGH;\n\nconst int openDoor = conexT1;\nconst int closeDoor = conexT2;\n\n//Statuses\nconst char* opened = "open";\nconst char* closed = "closed";\nconst char* closing = "closing";\nconst char* opening = "opening";\nconst char* stopped = "stopped";\n\n//pay loads\nconst char* payloadOpen = "OPEN";\nconst char* payloadClose = "CLOSE";\nconst char* payloadStop = "STOP";\nconst char* payloadAvailable = "online";\nconst char* payloadNotAvailable = "offline";\n\n#endif\n
\n", "Title": "MQTT device shows as offline in Home Assistant when connection is established", "Tags": "|mqtt|arduino|mosquitto|home-assistant|", "Answer": "

First, if you just have one garage door opener, why are you generating a random client ID every time? Using a fixed client ID would be the right thing to do here.

\n

The only time a random client ID makes any sense is when you are using a transient client, e.g. a web page using MQTT over WebSockets.

\n

Second, you appear to be looping your reconnect logic twice: once in setup() and loop(), and then again inside reconnect(). You should only need to loop on the !client.connected() condition once, probably in the reconnect() function.

\n

Turn on verbose logging on the broker and look to see what the broker thinks is happening when it publishes the LWT.

\n

Following along with mosquitto_sub at the same time will also be useful.

\n" }, { "Id": "5286", "CreationDate": "2020-10-07T11:36:09.373", "Body": "

I'm trying to build my own motorised roller blinds based off this project. It uses the Stepper_28BYJ_48 library.

\n

The problem I'm having is that the motor is either very slow and jittery, advancing one step at a time with a slight pause, or it overheats (I'm assuming) and restarts the device (NodeMCU) with this motor setup.

\n

I'm especially confused as I can load the motor-on-roller-blind-ws onto my setup and it will turn the motor smoothly each time.

\n

As far as I can see, I'm implementing the same code for the motor functionality.

\n

I initialize the motor at the top:

\n

Stepper_28BYJ_48 small_stepper(D1, D3, D2, D4); //Initiate stepper driver

\n

Then, in loop(), I call the step method with the appropriate direction.

\n

Full code is here:

\n
#include <Stepper_28BYJ_48.h>\n#include <ESP8266WiFi.h>\n#include <PubSubClient.h>\n#include <ArduinoOTA.h>\n#include "My_Helper.h"\n#include "ConfigHelper.h"\n\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n//ALL CONFIG CHANGES ARE LOCATED IN My_Helper.h//\n\nConfigHelper helper = ConfigHelper();\n\nWiFiClient espClient;\nPubSubClient client(espClient);\nStepper_28BYJ_48 small_stepper(D1, D3, D2, D4); //Initiate stepper driver\nJsonObject jsonConfig;\n\nvoid setup() {\n  // put your setup code here, to run once:\n  Serial.begin(115200);\n  stopPowerToCoils();\n\n  if(!SPIFFS.begin()){\n      Serial.println("An Error has occurred while mounting SPIFFS");\n      client.publish(coverDebugTopic, "Critical Error!", false);\n      return;\n  }\n\n  if(helper.loadconfig()){\n  \n    jsonConfig = helper.getconfig();\n\n    currentPosition = jsonConfig["current"];\n    minPosition = jsonConfig["min"];\n    maxPosition = jsonConfig["max"];\n\n  } else {\n    client.publish(coverDebugTopic, "No config found, using default configuration", false);\n  }\n  \n  setup_wifi();\n  client.setServer(mqtt_server, 1883);\n  client.setCallback(callback);\n  client.setBufferSize(1024);\n  \n  if (connectClient()) {\n    //Send cover entity details to home assistant on initial connection\n    //for auto discovery\n\n    DynamicJsonDocument mqttDevConfig(225);\n    mqttDevConfig["name"] = mqttCoverDeviceClientId;\n    mqttDevConfig["mf"] = manufacturer;\n    mqttDevConfig["mdl"] = model;\n    mqttDevConfig["sw"] = softwareVersion;\n    mqttDevConfig["ids"][0] = mqttCoverDeviceClientId;\n    mqttDevConfig["ids"][1] = 
mqttResetDeviceClientId;\n    mqttDevConfig["ids"][2] = mqttMinDeviceClientId;\n    mqttDevConfig["ids"][3] = mqttMaxDeviceClientId;\n    \n    DynamicJsonDocument mqttCoverConfig(540);\n    mqttCoverConfig["name"] = mqttCoverDeviceName;\n    mqttCoverConfig["dev_cla"] = mqttCoverDeviceClass;\n    mqttCoverConfig["stat_t"] = coverStateTopic;\n    mqttCoverConfig["cmd_t"] = coverCommandTopic;\n    mqttCoverConfig["opt"] = false;\n    mqttCoverConfig["ret"] = true;\n    mqttCoverConfig["avty_t"] = coverAvailabilityTopic;\n    mqttCoverConfig["uniq_id"] = mqttCoverDeviceClientId;\n    mqttCoverConfig["dev"] = mqttDevConfig;\n\n    char coverJson[540];\n    serializeJsonPretty(mqttCoverConfig, coverJson);\n    client.publish(coverConfigTopic, coverJson, false);\n    \n    DynamicJsonDocument mqttResetConfig(505);\n    mqttResetConfig["name"] = mqttResetDeviceName;\n    mqttResetConfig["ic"] = "mdi:lock-reset";\n    mqttResetConfig["cmd_t"] = resetCommandTopic;\n    mqttResetConfig["stat_t"] = resetStateTopic;\n    mqttResetConfig["avty_t"] = coverAvailabilityTopic;\n    mqttResetConfig["uniq_id"] = mqttResetDeviceClientId;\n    mqttResetConfig["dev"] = mqttDevConfig;\n\n    char resetJson[505];\n    serializeJsonPretty(mqttResetConfig, resetJson);\n    client.publish(resetConfigTopic, resetJson, false);\n\n    if(minPosition == -1){\n      DynamicJsonDocument mqttMinConfig(515);\n      mqttMinConfig["name"] = mqttMinDeviceName;\n      mqttMinConfig["ic"] = "mdi:blinds-open";\n      mqttMinConfig["cmd_t"] = minCommandTopic;\n      mqttMinConfig["stat_t"] = minStateTopic;\n      mqttMinConfig["avty_t"] = coverAvailabilityTopic;\n      mqttMinConfig["uniq_id"] = mqttMinDeviceClientId;\n      mqttMinConfig["dev"] = mqttDevConfig;\n  \n      char minJson[515];\n      serializeJsonPretty(mqttMinConfig, minJson);\n      client.publish(minConfigTopic, minJson, false);\n    } else {\n      client.publish(minConfigTopic, "", false);\n    }\n    \n\n    if(maxPosition == -1){\n   
   DynamicJsonDocument mqttMaxConfig(515);\n      mqttMaxConfig["name"] = mqttMaxDeviceName;\n      mqttMaxConfig["ic"] = "mdi:blinds";\n      mqttMaxConfig["cmd_t"] = maxCommandTopic;\n      mqttMaxConfig["stat_t"] = maxStateTopic;\n      mqttMaxConfig["avty_t"] = coverAvailabilityTopic;\n      mqttMaxConfig["uniq_id"] = mqttMaxDeviceClientId;\n      mqttMaxConfig["dev"] = mqttDevConfig;\n  \n      char maxJson[515];\n      serializeJsonPretty(mqttMaxConfig, maxJson);\n      client.publish(maxConfigTopic, maxJson, false);\n    } else {\n      client.publish(maxConfigTopic, "", false);\n    }\n\n  }\n\n  //Setup OTA\n  {\n    ArduinoOTA.setHostname((mqttCoverDeviceClientId + "-" + String(ESP.getChipId())).c_str());\n\n    ArduinoOTA.onStart([]() {\n      Serial.println("Start");\n    });\n    ArduinoOTA.onEnd([]() {\n      Serial.println("\\nEnd");\n    });\n    ArduinoOTA.onProgress([](unsigned int progress, unsigned int total) {\n      Serial.printf("Progress: %u%%\\r", (progress / (total / 100)));\n    });\n    ArduinoOTA.onError([](ota_error_t error) {\n      Serial.printf("Error[%u]: ", error);\n      if (error == OTA_AUTH_ERROR) Serial.println("Auth Failed");\n      else if (error == OTA_BEGIN_ERROR) Serial.println("Begin Failed");\n      else if (error == OTA_CONNECT_ERROR) Serial.println("Connect Failed");\n      else if (error == OTA_RECEIVE_ERROR) Serial.println("Receive Failed");\n      else if (error == OTA_END_ERROR) Serial.println("End Failed");\n    });\n    ArduinoOTA.begin();\n  }\n}\n\nvoid loop() {\n  //OTA client code\n  ArduinoOTA.handle();\n\n  //while connected we send the current door status\n  //and trigger relay if we need to\n  if (client.connected()) {\n    client.loop();\n\n    //only activate motor if we \n    if(motorDirection == OPEN || motorDirection == CLOSE){\n      int stepsBetweenMinMax = maxPosition - minPosition;\n\n      if (motorDirection == OPEN) {\n        if((minPosition != -1 && currentPosition == minPosition) ||\n       
     stepsBetweenMinMax == 0){\n          stopAndPublishState(motorDirection);\n        } else {\n          small_stepper.step(-1);\n          currentPosition = currentPosition - 1;\n        }\n        \n      } else if (motorDirection == CLOSE) {\n        if((maxPosition != -1 && currentPosition == maxPosition) ||\n            stepsBetweenMinMax == 0){\n          stopAndPublishState(motorDirection);\n        } else {\n          small_stepper.step(1);\n          currentPosition = currentPosition + 1;\n        }\n      }\n\n      Serial.print("Current Position: ");\n      Serial.println(currentPosition);\n     \n        \n      DynamicJsonDocument doc(50);\n      JsonObject currJson = doc.to<JsonObject>();\n      currJson["min"] = minPosition;\n      currJson["max"] = maxPosition;\n      currJson["current"] = currentPosition;\n      publishDebugJson(currJson);\n      if(helper.saveconfig(currJson)){\n        //client.publish(maxConfigTopic, "", false);\n      } else {\n        motorDirection = STOP;\n        stopPowerToCoils();\n      }\n    }\n    \n  } else {\n    connectClient();\n  }\n}\n\nvoid publishDebugJson(JsonObject json){\n  char mqttJson[50];\n  serializeJsonPretty(json, mqttJson);\n  client.publish(coverDebugTopic, mqttJson, false);\n}\n\nvoid stopAndPublishState(int finishedState){\n  stopPowerToCoils();\n  motorDirection = STOP;\n  const char* endState = opened;\n  if(finishedState == CLOSE){\n    endState = closed;\n  }\n  client.publish(coverStateTopic, endState, true);\n}\n\nvoid setup_wifi() {\n\n  delay(10);\n  // We start by connecting to a WiFi network\n  Serial.println();\n  Serial.print("Connecting to ");\n  Serial.println(ssid);\n\n  //Set WiFi mode so we don't create an access point.\n  WiFi.mode(WIFI_STA);\n  WiFi.begin(ssid, password);\n\n  while (WiFi.status() != WL_CONNECTED) {\n    delay(500);\n    Serial.print(".");\n  }\n\n  Serial.println("");\n  Serial.println("WiFi connected");\n  Serial.println("IP address: ");\n  
Serial.println(WiFi.localIP());\n}\n\nboolean connectClient() {\n  // Loop until we're connected\n  while (!client.connected()) {\n    Serial.print("Attempting MQTT connection...");\n    // Check connection\n    if (client.connect(mqttCoverDeviceClientId.c_str(), mqtt_user, mqtt_password, coverAvailabilityTopic, 0, true, payloadNotAvailable)) {\n      // Make an announcement when connected\n      Serial.println("connected");\n      client.publish(coverAvailabilityTopic, payloadAvailable, true);\n\n      client.subscribe(coverCommandTopic);\n      client.subscribe(coverAvailabilityTopic);\n      client.subscribe(resetCommandTopic);\n      client.subscribe(minCommandTopic);\n      client.subscribe(maxCommandTopic);\n\n      Serial.println("Subscribed to: ");\n      Serial.println(coverCommandTopic);\n      Serial.println(coverAvailabilityTopic);\n      Serial.println(resetCommandTopic);\n      Serial.println(minCommandTopic);\n      Serial.println(maxCommandTopic);\n      return true;\n    } else {\n      Serial.print("failed, rc=");\n      Serial.print(client.state());\n      Serial.println(" try again in 5 seconds");\n      // Wait 5 seconds before retrying\n      delay(5000);\n      return false;\n    }\n  }\n  return true;\n}\n\nvoid callback(char* topic, byte* message, unsigned int length) {\n\n  String messageStr;\n\n  for (int i = 0; i < length; i++) {\n    messageStr += (char)message[i];\n  }\n\n  if (String(topic) == coverCommandTopic) {\n    Serial.print("Home Assistant Command: ");\n    Serial.println(messageStr);\n\n    if (messageStr == payloadStop) {\n      stopPowerToCoils();\n      motorDirection = STOP;\n    }\n\n    if (messageStr == payloadOpen) {\n      motorDirection = OPEN;\n    }\n\n    if (messageStr == payloadClose) {\n      motorDirection = CLOSE;\n    }\n  }\n\n  if (String(topic) == resetCommandTopic) {\n    Serial.print("Home Assistant Config Reset Blinds: ");\n    Serial.println(messageStr);\n\n    if(messageStr == "ON"){\n      
helper.deletefile();\n      ESP.reset();\n    } \n  }\n\n  if(String(topic) == minCommandTopic){\n    Serial.print("Home Assistant Config Set Blinds Min: ");\n    Serial.println(messageStr);\n\n    if(messageStr == "ON"){\n      minPosition = currentPosition;\n      DynamicJsonDocument minDoc(50);\n      JsonObject minJson = minDoc.to<JsonObject>();\n      minJson["min"] = minPosition;\n      minJson["max"] = maxPosition;\n      minJson["current"] = currentPosition;\n      client.publish(coverDebugTopic, "Min Position Set", false);\n      publishDebugJson(minJson);\n      if(helper.saveconfig(minJson)){\n        client.publish(minConfigTopic, "", false);\n        client.publish(coverStateTopic, opened, false);\n      }\n    }\n  }\n\n  if(String(topic) == maxCommandTopic){\n    Serial.print("Home Assistant Config Set Blinds Max: ");\n    Serial.println(messageStr);\n\n    if(messageStr == "ON"){\n      maxPosition = currentPosition;\n      DynamicJsonDocument maxDoc(50);\n      JsonObject maxJson = maxDoc.to<JsonObject>();\n      maxJson["min"] = minPosition;\n      maxJson["max"] = maxPosition;\n      maxJson["current"] = currentPosition;\n      client.publish(coverDebugTopic, "Max Position Set", false);\n      publishDebugJson(maxJson);\n      if(helper.saveconfig(maxJson)){\n        client.publish(maxConfigTopic, "", false);\n        client.publish(coverStateTopic, closed, false);\n      }\n    }\n  }\n}\n\nvoid stopPowerToCoils() {\n  digitalWrite(D1, LOW);\n  digitalWrite(D2, LOW);\n  digitalWrite(D3, LOW);\n  digitalWrite(D4, LOW);\n}\n
\n", "Title": "How to get stepper motor to rotate smoothly and continuously?", "Tags": "|arduino|microcontrollers|", "Answer": "

The loop function does quite a lot of things, some of them probably a bit lengthy, so it takes a while to execute and get to the next call of loop() and the next step.

\n

Add (local) traces with precise timestamps to see how much time it's spending in the various parts of the loop (and how often the loop is run) to confirm and isolate the bits that are problematic.

\n" }, { "Id": "5290", "CreationDate": "2020-10-07T19:51:33.230", "Body": "

I am new to this IoT protocol area. My understanding is that MQTT is a lightweight messaging protocol for IoT devices.

\n

MQTT over WebSocket uses an HTTP Upgrade request to establish the WebSocket connection; otherwise both follow the same protocol for data exchange.

\n

Both sit on top of the TCP layer.

\n

Both support a persistent connection.

\n

Both support pub/sub model.

\n

The use-case difference between the two is said to be that MQTT over WebSocket is ideal when the client is a browser, since it is difficult to implement plain MQTT in a browser (though this might be made possible with a socket API).

\n

So what exactly is the technical difference between MQTT and MQTT over WebSocket that makes the latter the preferred choice for web browser apps?

\n", "Title": "What is the technical difference between MQTT and MQTT over web socket that allows the later to be preferred choice by web browser apps?", "Tags": "|mqtt|web-sockets|", "Answer": "

Nothing. As I stated in the answer on Stack Overflow, they are byte-for-byte exactly the same protocol; it's just the transport layer that changes, from raw TCP to WebSockets.

\n

The difference is that you CAN NOT open raw sockets from within the browser; the security sandbox will not allow you to open arbitrary TCP connections to random hosts. The sandbox only allows HTTP-based protocols, which includes WebSockets because they bootstrap from an initial HTTP request.
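On the broker side, exposing the same protocol over both transports is typically just configuration. For example, a sketch of the relevant part of a Mosquitto `mosquitto.conf` (assuming the broker was built with websockets support; the port numbers are conventional choices, not requirements):

```
# Plain MQTT over raw TCP for regular clients
listener 1883
protocol mqtt

# The same broker, exposed over WebSockets for browser clients
listener 9001
protocol websockets
```

Both listeners serve identical MQTT traffic; browser clients simply connect to the websockets port instead.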

\n

The other benefit of MQTT over WebSockets is that it can make use of existing proxy infrastructure if needed.
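As an illustration of that proxy point, an ordinary HTTP reverse proxy can sit in front of the broker and forward the WebSocket upgrade. A sketch of an nginx location block (the path, address, and port are placeholders; the broker is assumed to have a websockets listener on 9001):

```
location /mqtt {
    proxy_pass http://127.0.0.1:9001;
    # Forward the WebSocket upgrade handshake to the broker
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

This lets the broker share port 443 with a web site, pass through corporate proxies, and have TLS terminated by the proxy.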

\n" }, { "Id": "5300", "CreationDate": "2020-10-18T06:53:22.023", "Body": "

I have a Bosch Easycontrol thermostat and it's supposed to be compatible with ifttt.

\n

What happens though is that I click on the integration and, no matter which action I choose, the field "which room" is stuck in "Loading..." and I cannot complete any action. I tried re-connecting the service and the usual troubleshooting steps, but nothing works.

\n

I'd like to know how to fix it, or at least whether it's an IFTTT issue or a Bosch issue.

\n", "Title": "Ifttt not filling fields", "Tags": "|ifttt|", "Answer": "

It turns out that I forgot the basic step of actually connecting the device to IFTTT via the "external service" menu in the smartphone app.

\n

What I did was open the app on the phone, click "settings", then "external services", then "Easycontrol pairing page".

\n

That will require the serial number, access code and password.

\n

The first two are visible in the app under "Info" then "About", while the password is the one set in the app, NOT the one for the Bosch identity.

\n" }, { "Id": "5302", "CreationDate": "2020-10-19T12:31:32.043", "Body": "

I have a question about MQTT client/broker communication. Let's say a client C1 is connected to the broker with a username and password (with no encryption).

\n

What happens when a new client C2 (a malicious one) tries to publish or subscribe to a topic without actually initiating a connection, pretending to be C1? Is there any means by which the server can figure out that it is not really C1 trying to communicate?

\n

This question arises because, after the connection process, the broker does not really check for credentials each time a client tries to subscribe or publish to a topic (as far as I am aware; I may be wrong altogether).

\n", "Title": "MQTT Client Broker communication - can a malicious client masquerade as a legitimate one after being connected with credentials", "Tags": "|mqtt|security|", "Answer": "

As with nearly any unencrypted protocol, there is no man-in-the-middle protection.

\n

If this is a threat model you need to protect against, then running MQTT over SSL/TLS is the solution.
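For Mosquitto, a TLS listener is a few lines of configuration. A sketch of the relevant `mosquitto.conf` section (the certificate paths are placeholders; `require_certificate true` additionally demands client certificates, which also defeats the impersonation scenario above):

```
listener 8883
cafile   /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile  /etc/mosquitto/certs/server.key

# Optional: also authenticate clients by certificate
require_certificate true
```

With TLS in place, an attacker can no longer read or inject packets on an established C1 session.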

\n" }, { "Id": "5306", "CreationDate": "2020-10-21T10:44:32.457", "Body": "

I am a newbie to MQTT, and I was wondering how one could prevent a client that is not running on the same processor as the broker from connecting to it. In other words, how do I restrict non-local MQTT clients from connecting to the broker?

\n

I know there are different security measures, from username/password to SSL. But without any of these, is there some means to ensure that no remote client can ever succeed in establishing a connection?

\n", "Title": "Prevent a remote MQTT client from connecting to a broker", "Tags": "|mqtt|security|", "Answer": "

As with any IP server you can control which interfaces/addresses the server binds to (listens on).

\n

So if you only want clients on the same host as the broker to be able to connect, then you just have the broker bind to 127.0.0.1 rather than to 0.0.0.0 (a shortcut meaning all local interfaces).
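In Mosquitto this is done by giving the listener an explicit bind address. A minimal `mosquitto.conf` sketch:

```
# Accept connections only from processes on this host;
# clients on other machines cannot even open the TCP connection.
listener 1883 127.0.0.1
```

A remote client attempting to connect will get a connection-refused error at the TCP level, before any MQTT handshake takes place.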

\n" }, { "Id": "5314", "CreationDate": "2020-10-23T17:14:34.133", "Body": "

I have managed to close my myQ garage door (a LiftMaster) (by Chamberlain Group) using IFTTT. Here's the setup

\n\n

When setting up the IFTTT trigger and action, the actions available when searching for and selecting myQ present 3 options

\n\n

There is no "Open door". Selecting "Close door" identifies my garage door as an option, and the trigger works when the door is already open.

\n

Because there is no "Open door", I tried to use the same trigger when the door was closed to open the door (hoping this was a toggle rather than an explicit option) but this did not work.

\n

Are there any options here? It seems like this is an issue on the IFTTT side of things, but I am not sure.

\n", "Title": "Linking myQ garage door to IFTTT", "Tags": "|smart-home|ifttt|", "Answer": "

It was intentionally omitted for security reasons: you do not want just anyone (a hacker, or an intruder when integrated with other smart home services such as Alexa) to be able to open the door.

\n" }, { "Id": "5318", "CreationDate": "2020-10-24T19:08:17.863", "Body": "

Could someone help me to configure SmartIR, available on github to use with Tasmota over MQTT?

\n

I have installed the add-on with HACS and have a generic IR blaster flashed with Tasmota-IR (when I aim the remote control at the IR box I can see it receiving commands in the console).

\n

I'm stuck trying to get it working with a Gree air conditioner and a Samsung TV.

\n

This is my configuration.yaml

\n
smartir:\n\nmedia_player:\n  - platform: smartir\n    name: Bedroom TV\n    unique_id: bedroom_tv\n    device_code: 1060\n    controller_data: cmnd/tasmota_79A072/IRsend\n    #power_sensor: binary_sensor.tv_power\n\nclimate:\n  - platform: smartir\n    name: Bedroom AC\n    unique_id: bedroom_ac\n    device_code: 1180\n    controller_data: cmnd/tasmota_79A072/IRsend\n    temperature_sensor: sensor.temp_hum_cuarto_temperature\n    humidity_sensor: sensor.temp_hum_cuarto_humidity\n\n
\n

When pushing any buttons on generated cards nothing happens (on the Tasmota console, nothing is showing).

\n", "Title": "How to configure Home Assistant to get working SmartIR integration with Tasmota IR over MQTT?", "Tags": "|home-assistant|tasmota|", "Answer": "

OK, there are two key properties here to get SmartIR working over MQTT (with Tasmota):

\n\n

device_code

\n

I thought that just looking up codes in the GitHub repo here and then setting the value for my device would make it work, but it didn't.

\n

You first need to upload the [code].json file to your Home Assistant installation, at config/custom_components/smartir/codes/climate for an air conditioner or config/custom_components/smartir/codes/media_player for a TV.

\n

That sounds easy, but the complicated part is that most of the .json files available in the git repo are for Broadlink hardware and don't work for Tasmotized devices that communicate through MQTT, so you have two options:

\n
    \n
  1. Google a lot to try to find someone sharing a .json file for the same device as yours.
  2. \n
  3. Get codes yourself using the original remote control.
  4. \n
\n

To get codes yourself, you need to access your Tasmotized device through its web interface and go to "Console". Then take your remote control, point it at your IR device, and push a button.\nThe console will show information about the button you pressed.

\n

\"enter

\n

You will use this information to construct your own .json file, like this one (Samsung TV)

\n
//SAMSUNG TV\n{\n    "manufacturer": "Samsung",\n    "supportedModels": [\n      "UE55F8000",\n      "UExxF8000"\n    ],\n    "supportedController": "MQTT",\n    "commandsEncoding": "Raw",\n    "commands": {\n        "off": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E019E6\\",\\"DataLSB\\":\\"0x070702FD\\",\\"Repeat\\":0}",\n        "on": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E09966\\",\\"DataLSB\\":\\"0x070702FD\\",\\"Repeat\\":0}",\n        "previousChannel": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E008F7\\",\\"DataLSB\\":\\"0x070710EF\\",\\"Repeat\\":0}",\n        "nextChannel": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E048B7\\",\\"DataLSB\\":\\"0x070712ED\\",\\"Repeat\\":0}",\n        "volumeDown": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0D02F\\",\\"DataLSB\\":\\"0x07070BF4\\",\\"Repeat\\":0}",\n        "volumeUp": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0E01F\\",\\"DataLSB\\":\\"0x070707F8\\",\\"Repeat\\":0}",\n        "mute": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0F00F\\",\\"DataLSB\\":\\"0x07070FF0\\",\\"Repeat\\":0}",\n        "sources": {\n            "DTV": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0C23D\\",\\"DataLSB\\":\\"0x070743BC\\",\\"Repeat\\":0}",\n            "Antenna": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0D827\\",\\"DataLSB\\":\\"0x07071BE4\\",\\"Repeat\\":0}",\n            "HDMI": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0D12E\\",\\"DataLSB\\":\\"0x07078B74\\",\\"Repeat\\":0}",\n            "HDMI 1": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E09768\\",\\"DataLSB\\":\\"0x0707E916\\",\\"Repeat\\":0}",\n            "HDMI 2": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E07D82\\",\\"DataLSB\\":\\"0x0707BE41\\",\\"Repeat\\":0}",\n            "HDMI 3": 
"{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E043BC\\",\\"DataLSB\\":\\"0x0707C23D\\",\\"Repeat\\":0}",\n            "HDMI 4": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0A35C\\",\\"DataLSB\\":\\"0x0707C53A\\",\\"Repeat\\":0}",\n            "3D": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E08679\\",\\"DataLSB\\":\\"0x0707619E\\",\\"Repeat\\":0}",\n            "Channel 0": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E08877\\",\\"DataLSB\\":\\"0x070711EE\\",\\"Repeat\\":0}",\n            "Channel 1": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E020DF\\",\\"DataLSB\\":\\"0x070704FB\\",\\"Repeat\\":0}",\n            "Channel 2": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0A05F\\",\\"DataLSB\\":\\"0x070705FA\\",\\"Repeat\\":0}",\n            "Channel 3": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0609F\\",\\"DataLSB\\":\\"0x070706F9\\",\\"Repeat\\":0}",\n            "Channel 4": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E010EF\\",\\"DataLSB\\":\\"0x070708F7\\",\\"Repeat\\":0}",\n            "Channel 5": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0906F\\",\\"DataLSB\\":\\"0x070709F6\\",\\"Repeat\\":0}",\n            "Channel 6": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E050AF\\",\\"DataLSB\\":\\"0x07070AF5\\",\\"Repeat\\":0}",\n            "Channel 7": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E030CF\\",\\"DataLSB\\":\\"0x07070CF3\\",\\"Repeat\\":0}",\n            "Channel 8": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0B04F\\",\\"DataLSB\\":\\"0x07070DF2\\",\\"Repeat\\":0}",\n            "Channel 9": "{\\"Protocol\\":\\"SAMSUNG\\",\\"Bits\\":32,\\"Data\\":\\"0xE0E0708F\\",\\"DataLSB\\":\\"0x07070EF1\\",\\"Repeat\\":0}"\n        }\n    }\n}\n\n
\n

Here is another example, this time for climate

\n
//GREE \n\n{\n  "manufacturer":"Gree",\n  "supportedModels":[\n    "GREE"\n  ],\n  "supportedController":"MQTT",\n  "commandsEncoding":"Raw",\n  "minTemperature":16.0,\n  "maxTemperature":30.0,\n  "precision":1.0,\n  "operationModes":[\n    "heat_cool",\n    "fan_only",\n    "dry",\n    "cool",\n    "heat"\n  ],\n  "fanModes":[\n    "low",\n    "mid",\n    "high",\n    "auto"\n  ],\n  "commands":{\n    "off":"{\\"Vendor\\":\\"GREE\\", \\"Power\\":\\"Off\\"}",\n    "heat_cool":{\n      "low":{\n          //The following row must be replicated incrementing value 16 at property name end and also at Temp propertie inside. This will make your conditioner set those temps. Removed for code brevity\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Auto\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Min\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",        \n      },\n      "mid":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Auto\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Medium\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",        \n      },\n      "high":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Auto\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Max\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",        \n      },\n      "auto":{\n        
"16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Auto\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Auto\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",        \n      }\n    },\n    "fan_only":{\n      "low":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"fan_only\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Min\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "mid":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"fan_only\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Medium\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "high":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"fan_only\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Max\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "auto":{\n        
"16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"fan_only\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Auto\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        }\n    },\n    "dry":{\n      "low":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Dry\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Min\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "mid":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Dry\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Medium\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "high":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Dry\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Max\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "auto":{\n        
"16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Dry\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Dry\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        }\n    },\n    "cool":{\n      "low":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Cool\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Min\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "mid":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Cool\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Medium\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "high":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Cool\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Max\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "auto":{\n        
"16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Cool\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Cool\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        }\n    },\n    "heat":{\n      "low":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Heat\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Min\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n         },\n      "mid":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Heat\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Medium\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "high":{\n        "16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Heat\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Max\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        },\n      "auto":{\n        
"16":"{\\"Vendor\\":\\"GREE\\",\\"Model\\":1,\\"Mode\\":\\"Heat\\",\\"Power\\":\\"On\\",\\"Celsius\\":\\"On\\",\\"Temp\\":16,\\"FanSpeed\\":\\"Heat\\",\\"SwingV\\":\\"Highest\\",\\"SwingH\\":\\"Off\\",\\"Quiet\\":\\"Off\\",\\"Turbo\\":\\"Off\\",\\"Econo\\":\\"Off\\",\\"Light\\":\\"On\\",\\"Filter\\":\\"Off\\",\\"Clean\\":\\"Off\\",\\"Beep\\":\\"Off\\",\\"Sleep\\":-1}",\n        }\n    }\n  }\n}\n\n
\n

In any case, you will finally have your xxxx.json file uploaded to one of those folders and the device_code property configured with the same number.

\n

controller_data

\n

Here you just need to specify the MQTT topic, among some other parameters.\nThe MQTT topic is shown in your Tasmotized device's configuration

\n

\"enter

\n

cmnd/<your_mqtt_topic_here>/IRhvac for air conditioner

\n

cmnd/<your_mqtt_topic_here>/IRsend for tv

\n
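Before wiring SmartIR in, it can help to sanity-check the topic by publishing a command by hand with mosquitto_pub (the broker address is a placeholder; the topic uses the question's tasmota_79A072 device, and the payload is one of the Gree IRhvac commands from the .json above):

```
mosquitto_pub -h 192.168.1.10 \
  -t 'cmnd/tasmota_79A072/IRhvac' \
  -m '{"Vendor":"GREE","Model":1,"Mode":"Cool","Power":"On","Celsius":"On","Temp":24,"FanSpeed":"Auto"}'
```

If the Tasmota console logs the IRhvac command and the air conditioner reacts, the topic and codes are correct and any remaining problem is in the Home Assistant configuration.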

An example configuration.yaml:

\n
\nsmartir:\n  check_updates: false\n\nmedia_player:\n  - platform: smartir\n    name: Bedroom TV\n    unique_id: bedroom_tv\n    device_code: 1070\n    controller_data: cmnd/tasmota_smart_ir_bedroom/IRsend\n    #power_sensor: media_player.chromecast_cuarto\n\nclimate:\n  - platform: smartir\n    name: Bedroom AC\n    unique_id: bedroom_ac\n    device_code: 1180\n    controller_data: cmnd/tasmota_smart_ir_bedroom/IRhvac\n    temperature_sensor: sensor.temp_hum_cuarto_temperature\n    humidity_sensor: sensor.temp_hum_cuarto_humidity\n
\n

Feel free to let me know if something isn't clear!

\n" }, { "Id": "5349", "CreationDate": "2020-11-08T18:12:41.447", "Body": "

I'm learning AWS and I created a new thing with the required certificates to access it via MQTT:

\n\n

If I'm not wrong, the root-CA.crt file will be the same for all things because it's used to authenticate the server.

\n

What I don't understand is whether I need to create a different set of certificates (private.key and cert.pem) for every device I build.

\n", "Title": "Do I need different certificates for every thing?", "Tags": "|networking|security|aws-iot|aws|", "Answer": "

Yes, the whole point of using client-side certificates is to enable you to reliably and uniquely identify each client.

\n

AWS provides APIs to provision each device with its own cert/key.

\n

The other reason is that you can easily ban a single device if its certificate is compromised; if every device reused the same certificate, you would have to recall and update all of the devices should somebody get hold of the key/cert pair.

\n" }, { "Id": "5357", "CreationDate": "2020-11-11T15:24:13.003", "Body": "

I modified my /etc/mosquitto/mosquitto.conf to start on a different port and also require username/password authentication. I then started my service like so:

\n
 /usr/local/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf\n
\n

I confirmed that all my devices and websites are able to connect and use this service in the way I expect, so everything is perfect.

\n

I want to SSH into my server and use the command line interface to visually monitor some activity. So I typed the command mosquitto into bash. But I get the following output:

\n
1605108095: mosquitto version 1.4.10 (build date 2020-06-13 20:47:29+0000) starting\n1605108095: Using default config.\n1605108095: Opening ipv4 listen socket on port 1883.\n1605108095: Opening ipv6 listen socket on port 1883.\n
\n

The message says I've logged into an instance of mosquitto that is running on port 1883. This is NOT the instance I want to be monitoring. I want to monitor the instance that was initialized by /usr/local/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf. How can I do that?

\n", "Title": "How to login to an existing mosquitto instance to review activities?", "Tags": "|mqtt|", "Answer": "

The mosquitto command is purely for starting an instance of the broker; running it again will not get you any information about an already running instance.

\n

The default config normally sends the logs to a file, likely under /var/log/mosquitto; otherwise mosquitto logs to stdout in the terminal where you started it.

\n

If you want to monitor messages being published, then you need to use the mosquitto_sub command to act as an MQTT client. e.g.

\n
mosquitto_sub -v -t '#'\n
\n

This command will connect to a broker running on the same machine and subscribe to the wildcard topic of # (which gets everything).

\n

You mentioned that you changed the port, so you will need to add -p [portnumber]; if you need to supply a username/password, it's -u [username] -P [password].

\n" }, { "Id": "5363", "CreationDate": "2020-11-18T10:45:33.370", "Body": "

Regarding the payload format of MQTT messages, one usually sticks to standards. When the AWS IoT platform is used, the payload format is not pre-defined; there is just a best practice design guide. Usually, though, one sticks to payload format standards such as SensorML, which is used, for example, to add metadata for the distributed sensors of solar power plants. Are there JSON format schemas/standards for the process automation domain as well?

\n", "Title": "MQTT JSON format for process automation industry?", "Tags": "|protocols|standards|", "Answer": "

The Sparkplug working group from the Eclipse Foundation is trying to create a standard for MQTT topics and payload formatting. The specification is available here. You can also find more information in the FAQ.

\n" }, { "Id": "5368", "CreationDate": "2020-11-19T13:10:51.680", "Body": "

The WiFi of my Google Home Mini is dead after 1 year or so of use. Yes, I have tried everything to verify this (factory reset, Wi-Fi with password, without password, different bands, two totally different Wi-Fi networks).

\n

I hate to use Google Home Mini only as a book weight now. Are there any other ways to use it? For example, is there a way to use its speaker which is still decent?

\n", "Title": "Is there any use for a Google Home Mini without WiFi?", "Tags": "|google-home|", "Answer": "

There are 3rd-party Ethernet adapters (and Google used to make one for Chromecasts that I think should work) which, assuming it's just the Wi-Fi that is broken, will let you wire the Mini into the local LAN.

\n" }, { "Id": "5371", "CreationDate": "2020-11-20T17:29:26.747", "Body": "

I asked "Alexa, what's the distance between San Francisco and Los Angeles?". Alexa replied "11,711.8 kilometers away", which is absolutely incorrect.

\n

Why does this happen? Am I missing something here? What can I do about such incidents?

\n", "Title": "Incorrect answer from Alexa", "Tags": "|alexa|amazon-echo|", "Answer": "

For some reason, it would appear that Alexa is trying to find the distance between you and either San Francisco or Los Angeles... Perhaps Alexa is finding some place near you called Los Angeles or San Francisco (like a restaurant)?

\n

I would suggest asking "What's the distance between San Francisco, California and Los Angeles, California?"

\n

This should give Alexa the "hint" it needs to realise that you're not talking about something local.

\n" }, { "Id": "5381", "CreationDate": "2020-11-29T22:07:24.317", "Body": "

I have some questions about a new Arduino project I'm starting:

\n
    \n
  1. Since Vodafone (Italy, in my case) is shutting down 3G in 2021 (apparently keeping 2G/4G up), will the Arduino MKR GSM 1400 still be usable in the coming years?

    \n
  2. \n
  3. Does the Arduino MKR NB 1500 work with a classic 4G SIM too? On the website they claim it works with LTE Cat M1/NB1 bands, but I have a Thingsmobile all-in-one SIM card I'd like to use. It's a virtual telco popular for M2M.

    \n
  4. \n
  5. Can the Arduino MKR MEM Shield and the Arduino MKR CAN Shield coexist even though they both use SPI?

    \n
  6. \n
\n

Thanks!

\n", "Title": "Arduino MKR GSM/NB + CAN + SD Card", "Tags": "|arduino|", "Answer": "
    \n
  1. The spec sheet for the radio module says it is an HSPA and 2G module; HSPA is "3G". So yes, this should continue to work, assuming the local 2G network runs on one of the supported frequency bands.
  2. \n
  3. That is a question for your SIM provider, the only way to be sure is to ask them directly.
  4. \n
  5. You can have multiple devices on an SPI bus, but each needs its own Chip Select pin so that the modules can be enabled separately and don't clash.
  6. \n
\n" }, { "Id": "5384", "CreationDate": "2020-12-01T03:30:02.493", "Body": "

I am trying to subscribe to a mosquitto server that I installed and configured with this command

\n
mosquitto_sub -h myserver.myserver.myserver -p 9500 -t "test" -u "myuser" -P "my-correct-password" --capath /etc/ssl/certs/\n
\n

Where I substituted my actual values for myserver.myserver.myserver, myuser and my-correct-password. When I run this command, my terminal doesn't give any response. It doesn't even disconnect after waiting for a long time.

\n

However, if I replace my-correct-password with a password I know is incorrect, I get the response Connection Refused: not authorised. How come I can't subscribe to the mosquitto server with a correct password? And how come I only get a response from the server if I supply an erroneous password?

\n

I can't remember this being a problem in the past; I'm pretty sure I've run this command successfully before.

\n", "Title": "mosquitto_sub command gives no respond when correct password?", "Tags": "|mqtt|mosquitto|", "Answer": "

From all the available information in the question, the two simplest explanations are:

\n
    \n
  1. There are no messages being published on the topic subscribed to.
  2. \n
  3. There is an ACL in place on the broker and the user is not authorised to see messages on the topic subscribed to.
  4. \n
\n

Add -d to the command line to see if the connection and the subscription actually complete successfully.

\n" }, { "Id": "5387", "CreationDate": "2020-12-01T13:03:31.580", "Body": "

I just want to know how many clients are actively connected to my mosquitto server. Or even better, get a list of client ids connected to my mosquitto server. I read some documentation suggesting the topic $SYS/broker/clients/connected will give this information. But this command yielded no response and no results:

\n
mosquitto_sub -h myserver.myserver.myserver -p 9500 -t $SYS/broker/clients/connected -u "my-user" -P "my-password" --capath /etc/ssl/certs/\n
\n

(I replaced myserver.myserver.myserver and my-user and my-password with actual values.) I verified the connection is working because if I publish a message to the same topic, the message appears.

\n

How can I get a list of clients with active connection to my mosquitto server? Or at least a numeric count of active connections?

\n", "Title": "Count or list active client connections to mosquitto server", "Tags": "|mosquitto|", "Answer": "

This seems to work:

\n
netstat -ntp | grep ESTABLISHED.*mosquitto\n
\n

Which in my case outputs:

\n
tcp        0      0 10.42.0.2:1883          10.42.0.18:56553        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.19:54037        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.11:49321        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.15:48685        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.12:57691        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.13:56037        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.17:40679        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.16:39627        ESTABLISHED 448/mosquitto       \ntcp        0      0 10.42.0.2:1883          10.42.0.14:33079        ESTABLISHED 448/mosquitto       \n
\n

If one only cares about the total count:

\n
netstat -natp | grep ESTABLISHED.*mosquitto | wc -l\n
\n

prints

\n
9\n
\n

netstat arguments: -n to avoid resolving host names for IP addresses (faster), -t for TCP sockets only, and -p to display program names so we can filter using grep.
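If you want the count programmatically, here is a small Python sketch along the same lines (the parsing is a best-effort match on the netstat output shown above; the sample data is hard-coded for illustration):

```python
import subprocess  # only needed when reading live netstat output

def count_established(netstat_output: str, program: str = "mosquitto") -> int:
    """Count ESTABLISHED TCP connections belonging to `program`
    in `netstat -ntp`-style output."""
    return sum(
        1
        for line in netstat_output.splitlines()
        if "ESTABLISHED" in line and program in line
    )

# Live usage (needs netstat installed, and usually root for -p):
# out = subprocess.run(["netstat", "-ntp"], capture_output=True, text=True).stdout
# print(count_established(out))

# Hard-coded sample for illustration:
sample = """\
tcp 0 0 10.42.0.2:1883 10.42.0.18:56553 ESTABLISHED 448/mosquitto
tcp 0 0 10.42.0.2:1883 10.42.0.19:54037 ESTABLISHED 448/mosquitto
tcp 0 0 127.0.0.1:22   10.42.0.5:40000  ESTABLISHED 999/sshd
"""
```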

\n" }, { "Id": "5395", "CreationDate": "2020-12-06T06:35:33.357", "Body": "

I have device A, which is made of only a GPS module and a controller.

\n

Can that GPS report its location to device B (which is also controller-based) without using a GSM or WiFi module, rather than reporting to the Internet? Is that possible?

\n

If so, what should device B contain, and how should it be built? Or what should be modified on device A?

\n", "Title": "Can GPS device report to other device without internet?", "Tags": "|gps|cloud-computing|tracking-devices|", "Answer": "

GPS positioning works in two ways. A GPS receiver gets signals from satellites to compute your location. One method is A-GPS, which locates you quickly and accurately but needs an Internet connection to fetch the assistance data. The other is plain satellite positioning, which works without a data connection (you can check this on a phone with mobile data turned off), although the initial fix takes much longer.\nSo it depends on the application: if you want device A to send its location continuously to another controller, it needs a radio link such as WiFi or GSM; for short range, Bluetooth works locally. If, on the other hand, you only want to record the geographic location of an object, you can do that without the Internet; to make it effective you just have to design it well.

\n" }, { "Id": "5419", "CreationDate": "2020-12-18T01:14:47.253", "Body": "

I'm working on a project that requires multiple ESP32s to be able to receive a signal from a smartphone in order to close/open a set of doors. The catch is that this will be happening in a remote area with no internet whatsoever, and on a wide scale (hundreds of ESP devices). The setup also needs to be portable, as it will be moving around a lot. What is the best way of going about this sort of local network? So far I have considered:

\n\n

Does anyone have any suggestions?

\n", "Title": "Best way to set up a portable local IoT network?", "Tags": "|mqtt|esp32|mesh-networks|mobile-applications|", "Answer": "

The only two network types you can count on on a phone are WiFi and BLE (or cellular, but you told us there\u2019s no Internet). LoRa is indeed never available.

\n

WiFi requires the user to connect to the network (network name and usually password, though this can be provided as a QR Code), and phones don\u2019t always like WiFi networks without Internet access (they may complain or refuse to connect). But it\u2019s usable on any phone, even without an app.

\n

BLE usually requires an app to be installed on the phone, but can then work without the user having to enter any credentials.

\n

In both cases, range is limited, though very variable depending on the environment and the devices on both sides.

\n

One solution could be to set up a regular WiFi network with APs and a router (to act as a DHCP server), and have both phones and ESP32s connect to it. You may also need a DNS server, depending on what exactly you do. Depending on the size of the area to be covered and the number of devices, you may need several APs and either cables between them (possibly an Ethernet switch as well), or wireless links (e.g. \u201cWiFi mesh\u201d solutions).

\n

Another solution would be for the current phones to use BLE to talk to one of the ESP32s in range and then use some form of mesh between the ESP32s.

\n

Remember that the ESP32s can also act as APs, though I probably wouldn\u2019t count on them to connect lots of phones.

\n

Really, we don\u2019t have enough details to know which of those solutions could work for you, but I hope they give you starting points.

\n" }, { "Id": "5430", "CreationDate": "2020-12-31T11:07:01.820", "Body": "

I've developed an Android app for my dad for monitoring temperature from a specific room.\nHe wants to be able to see the temperature even when he is not home.

\n

I got it working with Amazon IoT and it's pretty great:

\n
    \n
  1. You publish the temperature to the Amazon IoT server
  2. \n
  3. You forward the result to your app
  4. \n
\n

But I also want a device like this using the same app, so how can I make my Android app identify with a specific device? I don't want a login mechanism; I just want to give my dad a digital key and, based on that, have my app identify which devices are his and subscribe only to those.

\n

What I'm thinking is this: generate a key that is stored on the Raspberry Pi, and append that ID when sending a message to the server. Similarly, the app will ask for the key, which you should get from an admin.

\n

Is this a good way, or do you know an easier alternative through Amazon IoT?

\n", "Title": "Monitoring temperature using AWS IoT", "Tags": "|mqtt|raspberry-pi|aws-iot|", "Answer": "

Provision an IoT thing. For your purpose, you can just do a one-off.\nIn your case, use the client id of your dad's device in the SQL WHERE clause of the IoT rule.\nhttps://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html#iot-sql-function-clientid

\n
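As a sketch, the rule's SQL statement could look like this (the topic filter and the client id 'dads-pi' are placeholders):

```sql
SELECT * FROM 'temperature/#' WHERE clientid() = 'dads-pi'
```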

Another idea is to change the topic that is published. Perhaps temperature/dadsRoom and use that in the IoT rule. This would probably be the easiest to merge later if you want a report of all rooms.

\n" }, { "Id": "5446", "CreationDate": "2021-01-11T04:07:05.357", "Body": "

I was wondering if anyone can give me some pseudocode structure that outlines the general procedures that need to happen to connect over cellular to a remote server using MQTT? To be specific, I already know all about MQTT basics. I need to write a library for this and I would just really appreciate some technical guidance on how to structure the code. If it helps at all, I'm using a Lara R2-02 Ublox cellular module from mikroe for my modem. If you have any technical resources you could share with me too, I would be grateful.

\n", "Title": "What is a pseudocode breakdown of the MQTT protocol for cellular devices?", "Tags": "|mqtt|wireless|", "Answer": "

The general steps are:

\n\n

If you get stuck at a specific step, you may be able to ask a more specific question that someone can give a specific answer to.\nOne other option is to use the AWS IoT Device SDK as a starting point. See https://docs.aws.amazon.com/iot/latest/developerguide/iot-sdks.html

\n" }, { "Id": "5448", "CreationDate": "2021-01-11T13:49:11.630", "Body": "

I did some brief search on the matter, but it doesn't really help. I want to detect when people approach each other too closely, and - possibly, in the future - leave too great a distance between one another.

\n\n

All are very interesting articles, and I suppose that I could always build them all & take my own measurements, but would rather save the time, if it is not going to be very accurate, so I am hoping that someone here can speak from experience.

\n

Ideally, I would prefer BLE 5.1, for AOA / AOD, to get extreme accuracy - and direction - but ESP32 doesn't have that, nor is it likely to any time soon (unless I can add it on, somehow? related question)

\n

How accurate would it be if I want to know when two devices approach within 1 metre, 2m, 5m, 10m? And what about far away, towards the edge of reception?

\n

I want to achieve this with something wearable, so like the look of the LILYGO\u00ae TTGO T-Wristband DIY Programmable Smart Bracelet

\n

\"enter

\n

but the actual hardware is for another question, maybe another site.

\n

Does anyone have any experience in this? Can you provide an approximation of granularity of measurement at various distances?

\n", "Title": "ESP32 proximity detection - how near, far and how accurate can it be?", "Tags": "|esp32|wearables|proximity|", "Answer": "

Haven't tried it myself, but a few things to consider:

\n\n

So no, you can't get any sort of accurate distance just from the RSSI, though you can get a vague idea.

\n
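For a rough feel of why RSSI is so coarse, the commonly used log-distance path-loss model can be inverted in a few lines of Python (the 1 m reference power and path-loss exponent n are assumptions that vary per device and environment):

```python
def estimate_distance(rssi: float, tx_power_at_1m: float = -59.0, n: float = 2.0) -> float:
    """Invert the log-distance path-loss model
        rssi = tx_power_at_1m - 10 * n * log10(d)
    to get a (very rough) distance d in metres.
    tx_power_at_1m is the calibrated RSSI at 1 m (iBeacon-style);
    n is the path-loss exponent (~2 in free space, 2.7-4+ indoors).
    """
    return 10 ** ((tx_power_at_1m - rssi) / (10 * n))
```

With n anywhere between 2 and 4 and RSSI jitter of ±5 dB or more, the same reading can map to distances differing by a factor of several, which is why only coarse buckets like immediate/near/far are reliable.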

Another reference point: Apple's Core Location framework, when it reports a beacon, only reports the following proximity values (iBeacons advertise a calibrated RSSI at 1\u00a0m to assist):

\n
\n\n
\n

They state:

\n
\n

Ranging reports when the two devices are far apart, near to each other, or in the immediate vicinity of each other; it does not offer a precise distance, nor should you rely on the strength of a beacon's signal to compute that information yourself.

\n
\n

So after experimenting with a pair of your own devices, you could determine a threshold over which you are sure that the devices are within a given distance. Depending on your goal, you could be very lax or very restrictive, it's up to you. But don't expect something more accurate than "very close", "quite far" and "somewhere in between".

\n

(Things are different if you have multiple beacons in range and a fingerprint of the area, but that's a completely different topic).

\n

As for max range, that depends a lot on TX power, antennas, sensitivity, cases, relative position, and most importantly obstacles. I've seen BLE devices unable to talk to one another beyond a few short meters, and I've detected BLE devices 20 meters away across many obstacles, possibly more. Only experiments with your chosen devices in the target environment/context will get you a better sense of what is possible.

\n

Without obstacles with high TX power devices you can supposedly reach hundreds of meters with line of sight, but of course this is a scenario that never happens.

\n

One important thing to note is that for detection to happen, you need to advertise and scan. Battery-operated ESP32-based devices are pretty bad at that, as unless I missed something, you can't do that in combination with deep sleep. So you get a pretty high power consumption, and you probably won't last more than a few hours on any battery that would fit in a wrist-mounted case.

\n

Nordic Semi chips of the nRF5 series are better at that, though there are quite a few other options. Wrist-mounted options based on the nrf52832 include Espruino's Bangle.js, though it features quite a few things you might not need (GPS, heart rate monitor, accelerometer, magnetometer...).

\n" }, { "Id": "5464", "CreationDate": "2021-01-16T17:48:26.163", "Body": "

I have been developing projects with Raspberry Pi 4. For the current project I am running a NextCloudPi server on it. I am able to access the server using my Internal IP. I want to access it using a public domain name or in other words through open Internet. My ISP does not allow port forwarding and static IP.

\n

I tried ZeroTier, which creates a VPN, and I can access the cloud server through the static IP assigned under the VPN, but the transfer speeds are really slow. Another issue is that I need to install it on all the devices from which I am going to access the cloud server, and of course on the RPi4 itself.

\n

Are there any other options to access the RPi4 (or IoT devices in general) through the Internet?\nAny suggestion would be greatly appreciated.

\n", "Title": "Access Raspberry Pi behind a NAT firewall through Internet", "Tags": "|raspberry-pi|routers|", "Answer": "

I have finally resolved the issue by using ngrok to create a tunnel. I had tried ngrok earlier but failed because I was running the service on port 80. When I ran the ngrok http service on port 443, I was able to access the NextCloud server using the domain name shown in the SSH terminal. NextCloud uses port 80 for HTTP and 443 for HTTPS. I am really not sure why I was not able to access NextCloud on port 80. This may help if someone is stuck on a similar problem. The catch with ngrok is that once the Raspberry Pi restarts you have to restart the ngrok service as well, and of course the domain address changes too. For my use case that is not a problem, though.

\n" }, { "Id": "5474", "CreationDate": "2021-01-21T11:55:20.833", "Body": "

I am studying incremental programming of constrained (low-power) IoT devices and discovered that many diff algorithms have been introduced in the literature (DASA, R3DIFF, DG, etc).

\n

Trying them out, I found that xdelta generally produces smaller patches. Hence the obvious question: is xdelta suitable for such an environment, and if so, why do you believe the authors did not simply use the xdelta algorithm in their applications (instead of developing other diff algorithms)?

\n", "Title": "Diff-patch algorithms", "Tags": "|over-the-air-updates|", "Answer": "

Please share a link to the paper you are alluding to when you mention the authors; perhaps someone can then think of something specific.\nOtherwise, the general answer is that creating something new may be better for a thesis than saying "there already exists something"!

\n" }, { "Id": "5482", "CreationDate": "2021-01-25T11:11:08.327", "Body": "

I'm looking into proximity sensors for bicycle attachments. The idea is to detect when a car passes close by to a rider. What are some types of proximity sensors that would be:

\n
    \n
  1. Cheap
  2. \n
  3. Range up to 2 meters
  4. \n
  5. Resistant to weather or can be put inside a protective enclosure and still work
  6. \n
  7. Works with relative motion between sensor and target
  8. \n
  9. Works with rain or high humidity (morning fog) between sensor and target
  10. \n
\n

I also welcome you to propose other considerations I haven't listed above.

\n", "Title": "What types of proximity sensors would fit this bicycle application?", "Tags": "|sensors|proximity|", "Answer": "

There are quite a few technologies that can be used, though I'm not sure about the impact of some of your conditions (e.g. weather) on their operation. Here are a few:

\n\n

You can of course derive speed from the variations in distance for the first two, but it probably won't be as precise and quick.

\n

You can find many of such sensors on Adafruit, Sparkfun, etc, and they are relatively cheap. But I have no idea what the impact of rain or fog may be on them. Note that they are all "active" sensors, which need power to transmit a signal before then receiving a response back, so battery operation may raise its own challenges.

\n" }, { "Id": "5488", "CreationDate": "2021-01-26T18:17:36.817", "Body": "

I've got powerful dual-core IoT gateways in the field with high-speed cellular modems and good internet connections, but they fail to send 2.5 KB MQTT messages to my AWS IoT message broker. My program sends messages of various sizes, and the 0.1 KB or 0.2 KB messages succeed >99% of the time. The 1.5 KB messages are about 50/50, and the 2.5 KB messages succeed less than 10% of the time... if I'm not watching them (It gets weirder).

\n

My gateways will go several days without being able to send in the 2.5 KB message (all the while successfully sending the smaller 0.1 KB and 1.5 KB messages), but as soon as I VPN into the gateway with OpenVPN to investigate, it instantly sends in the 2.5 KB message. It's like asking my kids to do something while I'm gone; as soon as I return, it gets done instantly\u2026 So weird and frustrating!!

\n

Thus, I'm guessing it's got something to do with my gateways' internet connections. I could stream Netflix movies on them, but they can't send 2.5 KB MQTT messages... When I install software on them, they can download megabytes of data in seconds. I'm also guessing it's not AWS IoT, because when I reproduce the problem from my development computer, the 2.5 KB message always publishes successfully to the AWS IoT message broker.

\n

USD $580 gateway specs:

\n\n

Python code for connecting to AWS IoT message broker with Paho MQTT library:

\n
\nclass PahoContainer:\n    def __init__(\n        self,\n        c,\n        mqtt_broker,\n        cert_dir="/home/user/certs",\n        set_on_message=True,\n        set_on_publish=True,\n        aws_thing=None,\n        set_will=True,\n    ):\n        """Connects a client to MQTT broker"""\n\n        self.c = c\n        self.mqtt_broker = mqtt_broker\n        self.cert_dir = cert_dir\n        self.set_on_message = set_on_message\n        self.set_on_publish = set_on_publish\n        self.aws_thing = aws_thing\n\n        self.connect(set_will=set_will)\n\n    def on_connect(self, client, userdata, flags, rc):\n        """The callback for when the client receives a CONNACK response from the server."""\n\n        self.c.logger.info(f"Paho connected with result code: {rc}")\n\n        # If the result code == 0 == True, set the connected_flag = True\n        if rc == 0:\n            self.c.logger.info(f"Setting Paho client.connected_flag = True")\n            client.connected_flag = True\n\n    def on_disconnect(self, client, userdata, rc):\n        """\n        The callback for a disconnection. You will need to reconnect as soon as possible.\n\n        Since we run a network loop using loop_start() or loop_forever(), the re-connections are automatically handled.\n\n        A new connection attempt is made automatically in the background every 3 to 6 seconds.\n        """\n        self.c.logger.info(f"on_disconnect callback. 
Disconnection reason rc: '{rc}'")\n\n        client.connected_flag = False\n        client.disconnect_flag = True\n\n    def on_message(self, client, userdata, msg):\n        """The callback for when a PUBLISH message is received from the server"""\n        self.c.logger.info(f"Paho msg.topic: {msg.topic}; str(msg.payload): {str(msg.payload)}")\n\n    def on_publish(self, client, userdata, mid):\n        """The callback for when a PUBLISH message is sent to the server"""\n        self.c.logger.info(f"Paho on_publish callback for Message ID (mid): {mid}")\n\n    def on_log(self, client, userdata, level, buf):\n        """Callback to record log messages"""\n        self.c.logger.info(f"on_log callback. Level: '{level}'; msg buf: '{buf}'")\n\n    def try_connecting(self, broker, port, keepalive, try_x_times=20):\n        """Try connecting up to 20 times before raising an error"""\n        counter = 0\n        while True:\n            counter += 1\n            try:\n                self.client.connect(broker, port=port, keepalive=keepalive)\n            except Exception:\n                x_more_times = try_x_times - counter\n                if x_more_times == 0:\n                    raise\n                self.c.logger.exception(f"Problem connecting. Will try again {x_more_times}...")\n                time.sleep(0.1)\n            else:\n                break\n\n    def connect(self, set_will=True):\n        """Connect to the message broker server"""\n\n        client_id = mqtt.base62(uuid.uuid4().int, padding=22)\n        self.client = mqtt.Client(client_id=client_id, clean_session=True)\n\n        # Set a will to be sent by the broker in case the client disconnects unexpectedly.\n        # This must be called before connect() to have any effect.\n        # topic: The topic that the will message should be published on.\n        # payload: The message to send as a will. 
If not given, or set to None a\n        # zero length message will be used as the will.\n        if set_will:\n            if self.aws_thing is None:\n                self.c.logger.warning("set_will is True but there is no self.aws_thing, so it can't happen")\n            else:\n                topic_lwt = f'last_will/{self.aws_thing.upper()}'\n                payload_lwt = json.dumps({"connected": 0})\n                self.client.will_set(topic_lwt, payload=payload_lwt, qos=1, retain=False)\n\n        # We MUST use this on_connect callback to set the client.connected_flag = True.\n        # Otherwise we'll be in an infinite loop\n        self.client.on_connect = self.on_connect\n        self.client.on_disconnect = self.on_disconnect\n\n        # Enable logging using the standard python logging package.\n        # This may be used at the same time as the on_log callback method\n        # If logger is specified (default logger=None), then that logging.Logger object will be used;\n        # otherwise one will be created automatically\n        self.client.enable_logger(logger=self.c.logger)\n        # self.client.enable_logger(logger=None)\n        # Set the log level, if logger=None in enable_logger()\n        self.client._logger.setLevel(logging.DEBUG)\n        self.client.on_log = self.on_log\n        # The client will automatically retry connection.\n        # Between each attempt it will wait a number of seconds between min_delay and max_delay\n        # When the connection is lost, initially the reconnection attempt is delayed of min_delay seconds.\n        # It's doubled between subsequent attempt up to max_delay.\n        # The delay is reset to min_delay when the connection complete (e.g. 
the CONNACK is received,\n        # not just the TCP connection is established).\n        self.client.reconnect_delay_set(min_delay=1, max_delay=5)\n        # Set the maximum number of messages with QoS>0 that can be part way through their network flow at once.\n        # Defaults to 20. Increasing this value will consume more memory but can increase throughput\n        self.client.max_inflight_messages_set(10)\n        # Set the maximum number of outgoing messages with QoS>0 that can be pending in the outgoing message queue.\n        # Defaults to 0. 0 means unlimited. When the queue is full, any further outgoing messages would be dropped.\n        self.client.max_queued_messages_set(0)\n        # Set the time in seconds before a message with QoS>0 is retried, if the broker does not respond.\n        # This is set to 5 seconds by default and should not normally need changing. \n        self.client.message_retry_set(2)\n\n        if self.set_on_message:\n            self.client.on_message = self.on_message\n        if self.set_on_publish:\n            self.client.on_publish = self.on_publish\n\n        # Initialize client.connected_flag = False\n        self.client.connected_flag = False\n        self.c.logger.info(f"Connecting to broker: {self.mqtt_broker}")\n        self.root_ca, self.device_cert, self.private_key = get_certs(self.c, self.cert_dir)\n        self.client.tls_set(\n            ca_certs=self.root_ca,\n            certfile=self.device_cert,\n            keyfile=self.private_key,\n            cert_reqs=ssl.CERT_REQUIRED,\n            tls_version=ssl.PROTOCOL_TLSv1_2,\n            ciphers=None,\n        )\n        self.try_connecting(\n            self.mqtt_broker,\n            port=8883,\n            keepalive=60,\n            try_x_times=20\n        )\n\n        # We must start the loop before the while not client.connected_flag loop\n        self.client.loop_start()\n\n        # If we are not connected yet, wait a bit, then try again before returning 
the client\n        while not self.client.connected_flag:\n            seconds_to_sleep = 0.05\n            self.c.logger.info(\n                f"Waiting {seconds_to_sleep} seconds, then checking client.connected_flag again"\n            )\n            time.sleep(seconds_to_sleep)\n
\n

Simple code for sending a message to AWS IoT:

\n
# The metrics_dict is a Python dictionary with 2.5 KB of key/value pairs\npayload = json.dumps({"metrics": metrics_dict})\n\ninfo = paho_container.client.publish(\n    topic,\n    payload,\n    qos=1,\n)\n
\n

Linux Network Manager (nmcli) command to create GSM cellular internet connection:

\n
sudo nmcli radio wwan on\nsudo nmcli c add type gsm ifname '*' con-name 'my_conn' apn 'pda.bell.ca' connection.autoconnect yes ipv4.dns '8.8.8.8 8.8.4.4'\nsudo nmcli c up 'my_conn'\n
\n

EDIT Jan 26, 2021:

\n

output from ifconfig tun0 command for OpenVPN connection (I've changed the IP addresses):

\n
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500\n        inet 172.27.abc.def  netmask 255.255.248.0  destination 172.27.abc.def\n        inet6 fe80::4597:4b9f:abcd:efgh  prefixlen 64  scopeid 0x20<link>\n        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)\n        RX packets 9235  bytes 2655505 (2.6 MB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 9768  bytes 3329110 (3.3 MB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n
\n

output from ifconfig wwp0s21f0u4i5 command for GSM cellular connection, which shows the MTU is 1500 bytes (I've changed the IP addresses):

\n
wwp0s21f0u4i5: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500\n        inet 174.90.ghi.jkl  netmask 255.255.255.248  destination 174.90.ghi.jkl\n        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)\n        RX packets 11784052  bytes 2475908779 (2.4 GB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 12009517  bytes 2104615202 (2.1 GB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n
\n

Output from nmcli c s user_apn (I've changed the IP addresses):

\n
$ nmcli c s user_apn\nconnection.id:                          user_apn\nconnection.uuid:                        05ebc6d3-4fbb-4ddb-93fd-25fb57314ca2\nconnection.stable-id:                   --\nconnection.type:                        gsm\nconnection.interface-name:              cdc-wdm0\nconnection.autoconnect:                 yes\nconnection.autoconnect-priority:        0\nconnection.autoconnect-retries:         -1 (default)\nconnection.auth-retries:                -1\nconnection.timestamp:                   1611695541\nconnection.read-only:                   no\nconnection.permissions:                 --\nconnection.zone:                        --\nconnection.master:                      --\nconnection.slave-type:                  --\nconnection.autoconnect-slaves:          -1 (default)\nconnection.secondaries:                 --\nconnection.gateway-ping-timeout:        0\nconnection.metered:                     unknown\nconnection.lldp:                        default\nconnection.mdns:                        -1 (default)\nipv4.method:                            auto\nipv4.dns:                               8.8.8.8,8.8.4.4\nipv4.dns-search:                        --\nipv4.dns-options:                       ""\nipv4.dns-priority:                      0\nipv4.addresses:                         --\nipv4.gateway:                           --\nipv4.routes:                            --\nipv4.route-metric:                      -1\nipv4.route-table:                       0 (unspec)\nipv4.ignore-auto-routes:                no\nipv4.ignore-auto-dns:                   no\nipv4.dhcp-client-id:                    --\nipv4.dhcp-timeout:                      0 (default)\nipv4.dhcp-send-hostname:                yes\nipv4.dhcp-hostname:                     --\nipv4.dhcp-fqdn:                         --\nipv4.never-default:                     no\nipv4.may-fail:                          yes\nipv4.dad-timeout:                       -1 (default)\nipv6.method:                         
   auto\nipv6.dns:                               --\nipv6.dns-search:                        --\nipv6.dns-options:                       ""\nipv6.dns-priority:                      0\nipv6.addresses:                         --\nipv6.gateway:                           --\nipv6.routes:                            --\nipv6.route-metric:                      -1\nipv6.route-table:                       0 (unspec)\nipv6.ignore-auto-routes:                no\nipv6.ignore-auto-dns:                   no\nipv6.never-default:                     no\nipv6.may-fail:                          yes\nipv6.ip6-privacy:                       -1 (unknown)\nipv6.addr-gen-mode:                     stable-privacy\nipv6.dhcp-send-hostname:                yes\nipv6.dhcp-hostname:                     --\nipv6.token:                             --\ngsm.number:                             *99#\ngsm.username:                           --\ngsm.password:                           <hidden>\ngsm.password-flags:                     0 (none)\ngsm.apn:                                wrmstatic.bell.ca.ioe\ngsm.network-id:                         --\ngsm.pin:                                <hidden>\ngsm.pin-flags:                          0 (none)\ngsm.home-only:                          no\ngsm.device-id:                          --\ngsm.sim-id:                             --\ngsm.sim-operator-id:                    --\ngsm.mtu:                                auto\nproxy.method:                           none\nproxy.browser-only:                     no\nproxy.pac-url:                          --\nproxy.pac-script:                       --\nGENERAL.NAME:                           user_apn\nGENERAL.UUID:                           05ebc6d3-4fbb-4ddb-93fd-25fb57314ca2\nGENERAL.DEVICES:                        cdc-wdm0\nGENERAL.STATE:                          activated\nGENERAL.DEFAULT:                        yes\nGENERAL.DEFAULT6:                       no\nGENERAL.SPEC-OBJECT:                    
--\nGENERAL.VPN:                            no\nGENERAL.DBUS-PATH:                      /org/freedesktop/NetworkManager/ActiveConnection/1\nGENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/Settings/1\nGENERAL.ZONE:                           --\nGENERAL.MASTER-PATH:                    --\nIP4.ADDRESS[1]:                         174.90.123.456/29\nIP4.GATEWAY:                            174.90.123.457\nIP4.ROUTE[1]:                           dst = 174.90.123.452/29, nh = 0.0.0.0, mt = 700\nIP4.ROUTE[2]:                           dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000\nIP4.ROUTE[3]:                           dst = 54.218.161.180/32, nh = 174.90.186.221, mt = 0\nIP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 174.90.186.221, mt = 700\nIP4.DNS[1]:                             70.28.245.227\nIP4.DNS[2]:                             184.151.118.254\nIP4.DNS[3]:                             8.8.8.8\nIP4.DNS[4]:                             8.8.4.4\nIP6.GATEWAY:                            --\n
\n

EDIT Jan 27, 2021 at 9:00 a.m. MST: Output from traceroute --mtu <broker>

\n

I'm trying to figure out whether this is a packet-fragmentation issue, related to the MTU of 1500 and the fact that MQTT messages start to fail around the 1.5 KB size and almost always fail at the 2.5 KB size.

\n
$ traceroute --mtu abcdefg-ats.iot.us-west-2.amazonaws.com\ntraceroute to abcdefg-ats.iot.us-west-2.amazonaws.com (52.43.abc.def), 30 hops max, 65000 byte packets\n 1  172.27.abc.def (172.27.abc.def)  78.980 ms F=1500  75.051 ms  77.459 ms\n 2  ec2-50-112-abc-def.us-west-2.compute.amazonaws.com (50.112.abc.def)  101.733 ms ec2-34-221-abc-def.us-west-2.compute.amazonaws.com (34.221.abc.def)  78.166 ms ec2-50-112-abc-def.us-west-2.compute.amazonaws.com (50.112.abc.def)  93.053 ms\n 3  * * *\n 4  * * *\n 5  * * *\n 6  * * *\n 7  * * *\n 8  * * *\n 9  * * *\n10  * * *\n11  * * *\n12  * * *\n13  * * *\n14  * * *\n15  * * *\n16  * * *\n17  * * *\n18  * * *\n19  * * *\n20  * * *\n21  * * *\n22  * * *\n23  * * *\n24  * * *\n25  * * *\n26  * * *\n27  * * *\n28  * * *\n29  * * *\n30  * * *\n
\n

EDIT Jan 27, 2021 at 9:50 a.m. MST showing output of ping commands:

\n

When I ping the AWS IoT message broker with 1300 bytes, it gets through every time:

\n
$ ping -c 3 -s 1300 52.43.abc.def\nPING 52.43.abc.def (52.43.abc.def) 1300(1328) bytes of data.\n1308 bytes from 52.43.abc.def: icmp_seq=1 ttl=253 time=87.7 ms\n1308 bytes from 52.43.abc.def: icmp_seq=2 ttl=253 time=99.7 ms\n1308 bytes from 52.43.abc.def: icmp_seq=3 ttl=253 time=106 ms\n
\n

However, when I ping the broker with 1400 bytes (1.4 KB), it times out! Why?

\n
$ ping -c 3 -s 1400 52.43.abc.def\nPING 52.43.abc.def (52.43.abc.def) 1400(1428) bytes of data.\n\n--- 52.43.163.79 ping statistics ---\n3 packets transmitted, 0 received, 100% packet loss, time 2051ms\n\n
\n

EDIT Jan 27 at 13:00 MST showing ip route show output:

\n

@hardillb asked if the "default route" changed due to OpenVPN (tun0 interface) starting. I wasn't sure what that meant at first, but now I think OpenVPN does change the default route. See the following ip route show output that references tun0 (the OpenVPN network interface):

\n
$ ip route show\n0.0.0.0/1 via 172.27.abc.def dev tun0\ndefault via 10.74.abc.def dev wwp0s21f0u4i5 proto static metric 700\n10.74.abc.def/30 dev wwp0s21f0u4i5 proto kernel scope link src 10.74.abc.def metric 700\n54.218.abc.def via 10.74.abc.def dev wwp0s21f0u4i5\n128.0.0.0/1 via 172.27.abc.def dev tun0\n169.254.0.0/16 dev enp2s0 scope link metric 1000 linkdown\n172.27.abc.def/21 dev tun0 proto kernel scope link src 172.27.abc.def\n192.168.2.0/24 dev enp2s0 proto kernel scope link src 192.168.2.2 metric 100 linkdown\n
\n", "Title": "Python Paho MQTT 2.5 KB messages not sending for days, while 0.1 KB messages send fine", "Tags": "|mqtt|aws-iot|linux|paho|python|", "Answer": "

There may be paths with an MTU of 1280 in the way. Try an MTU of 1100 or 1200 and see if that fixes it.\nIf not, use a path-MTU discovery tool to find the actual MTU from your source to your destination.
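A minimal sketch of that discovery, assuming a `probe(size)` callable that wraps something like `ping -M do -s <size> <host>` (send one don't-fragment packet of that payload size and report success); the helper just binary-searches for the largest payload that gets through:

```python
def find_max_payload(probe, lo=500, hi=1500):
    """Binary-search the largest payload size for which probe(size) succeeds.

    probe(size) -> bool is a hypothetical stand-in for sending one
    don't-fragment packet of `size` bytes (e.g. `ping -M do -s <size> <host>`).
    Returns None if even the smallest size fails.
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(mid):
            best = mid       # this size fits; try bigger
            lo = mid + 1
        else:
            hi = mid - 1     # too big; try smaller
    return best
```

If this reports a maximum ICMP payload of about 1252 bytes, the path MTU is about 1280 (payload plus 28 bytes of IP and ICMP headers), matching the 1280 figure above.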

\n" }, { "Id": "5530", "CreationDate": "2021-02-11T16:22:28.600", "Body": "

I'm working on a project that might grow to several thousand things. The management is on AWS. My lack of knowledge is about practical operations, like creating the things.

\n

Currently we need to:

\n
    \n
  1. power up the board to get (via serial, display, WiFi, etc...) the ID
  2. \n
  3. on AWS create a new thing with this ID
  4. \n
  5. create the certificates as well
  6. \n
  7. download the certificates and copy them to the board
  8. \n
  9. add the new thing to our system's database
  10. \n
  11. assign the new thing to the commercial stuff (i.e. customer, plant, etc...)
  12. \n
\n

Please note that each board is configured and installed at our office, so we know in advance which customer/plant it is destined for.

\n

Of course this procedure is time-consuming and error-prone. It works for a few prototypes, but it cannot work when the numbers grow.

\n

Is there a way to automate this? For another customer, where I had full control of the code (no AWS, just a plain and simple PHP backend), I did something like this:

\n
    \n
  1. On power up the board checks if it was already configured
  2. \n
  3. If not, sends its own ID to a local webserver
  4. \n
  5. If the ID is unknown, it's added to the database ("thing" created) in a special table of the newly discovered things
  6. \n
  7. If there are systems that are waiting for boards, the software assigns the newly discovered things to them and puts the record in another table
  8. \n
  9. The board updates itself by downloading the configuration from a REST service
  10. \n
\n

It worked fine, but it had two main characteristics that I don't have now:

\n\n

Please, would you help me to understand how one should approach these simple operations in a Cloud-based environment?

\n", "Title": "Create and manage thousands of \"things\"", "Tags": "|aws|product-design|", "Answer": "

Why not let it have an initial certificate so that the first message can be "Hello. I'm up and my serial number is abcd."?\nIn the cloud, you can then look up your sales/fulfillment database and send the device the rest of the info: what it is configured as and for which customer.\nFrom that point on, the device can behave appropriately with that configuration, its first message being a confirmation that it is so configured.

\n

If no info can be found in your fulfillment database, the device has nothing to do until the next power-on, or it can perhaps retry in a few hours.
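A rough sketch of the cloud side of this flow, using AWS IoT's registry API via boto3 (`create_thing` and thing attributes are real API calls, but the fulfillment lookup, its field names, and the returned config shape are hypothetical); the client is injected so it can be stubbed:

```python
def register_device(iot_client, fulfillment_db, serial):
    """Handle a device's first "hello" message.

    fulfillment_db is a hypothetical mapping of serial -> order info.
    Returns the config to send back to the device, or None if the serial
    is unknown (the device then retries later, as described above).
    """
    order = fulfillment_db.get(serial)
    if order is None:
        return None  # unknown device: nothing to do until it retries
    # Create the thing in the AWS IoT registry, tagged with its assignment.
    iot_client.create_thing(
        thingName=serial,
        attributePayload={"attributes": {
            "customer": order["customer"],
            "plant": order["plant"],
        }},
    )
    return {"customer": order["customer"], "plant": order["plant"]}
```

In production `iot_client` would be `boto3.client("iot")`; with a stub it is straightforward to unit-test the provisioning logic offline.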

\n" }, { "Id": "5538", "CreationDate": "2021-02-15T16:47:00.847", "Body": "

I'm trying to connect to a broker with the following script

\n\n
#!/usr/bin/env python\n\nimport paho.mqtt.client as mqtt\nimport random\nimport requests\nimport time  # needed for time.sleep() in the main loop\nimport warnings\nimport LoggingManager\n\nlogger = LoggingManager.log_setup()\nlogger.info("Start Working")\n\nmy_client_id = f'python-mqtt-{random.randint(0, 1000)}'\n# method = "websockets"\nmy_endpoint = "wss://psmart-test-api.aegean.gr/ws/"\n# port = 443\n# keepalive = 60\nkey = "ea75d54ea85b54ba5cde22ffb31090f9b2cbaeba8ad6eef1be575bfae89f56d3"\nurl_login = "https://psmart-test-auth.aegean.gr/auth/refresh_session"\n\nwarnings.filterwarnings('ignore', message='Unverified HTTPS request')\n\n\ndef myNewlogin(key, url_login):\n    loginobj = '''{ "token": "%s"  }''' % (key)\n    loginHeaders = {'Content-Type': 'application/json'}\n    login_response = requests.request("POST", url_login, data=loginobj, headers=loginHeaders, verify=False)\n\n    table = {}\n    table["status_code"] = login_response.status_code\n\n    if (login_response.status_code == 200):\n        result = login_response.json()\n        mytoken = result["token"]\n        table["token"] = mytoken\n    else:\n        table["token"] = login_response.text\n    return (table)\n\n\nis_connected = False\n\n# The callback for when the client receives a CONNACK response from the server.\ndef on_connect(client, userdata, flags, rc):\n    print("Connected with result code " + str(rc))\n    global is_connected\n    is_connected = True\n    # Subscribing in on_connect() means that if we lose the connection and\n    # reconnect then subscriptions will be renewed.\n    # client.subscribe("$SYS/#")\n    # client.subscribe("org/TERIADE/K4XKF6UT/Temperature")\n\n\n# The callback for when a PUBLISH message is received from the server.\ndef on_message(client, userdata, msg):\n    print(msg.topic + " " + str(msg.payload))\n\n\ndef on_disconnect(client, userdata, rc):\n    print("Disconnected")\n\n\n# authenticate to take username\n# user = str(myNewlogin(key, url_login))\n# print(user)\n\nlogin = myNewlogin(key, url_login)\n\nif 
(login["status_code"] == 200):  # if we can login\n    # print("mylogin 200")\n    logger.info("mylogin 200")\nelse:\n    # print(login)\n    logger.info("mylogin" + str(login["status_code"]))\n\n\n# Create a client instance\nclient = mqtt.Client(my_client_id, transport='websockets')\n\n# Register callbacks\nclient.on_connect = on_connect\nclient.on_message = on_message\n\n# client.tls_set(ca_certs="https://psmart-api.aegean.gr/roots.pem")\n\n# client.tls_set()\n\n# Set userid and password\nclient.username_pw_set(login["token"])\n\n# Connect\nclient.connect(my_endpoint, 443, 60)\n\nclient.loop_start()\n\npublish_interval = 5\nvalue = 0\nwhile 1 == 1:\n    if is_connected == True:\n        print('hello')\n    else:\n        print("still waiting for connection")\n    time.sleep(publish_interval)\n\n\n# Blocking call that processes network traffic, dispatches callbacks and\n# handles reconnecting.\n# Other loop*() functions are available that give a threaded interface and a\n# manual interface.\n# client.loop_forever()\n\n# while 1:\n    # Publish a message every second\n    # time.sleep(1)\n
\n

and I'm getting the following error

\n
Traceback (most recent call last):\n  File "mqtt_client_psmart.py", line 91, in <module>\n    client.connect(my_endpoint, 443, 60)\n  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 941, in connect\n    return self.reconnect()\n  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 1075, in reconnect\n    sock = self._create_socket_connection()\n  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 3546, in _create_socket_connection\n    return socket.create_connection(addr, source_address=source, timeout=self._keepalive)\n  File "/usr/lib/python3.7/socket.py", line 707, in create_connection\n    for res in getaddrinfo(host, port, 0, SOCK_STREAM):\n  File "/usr/lib/python3.7/socket.py", line 748, in getaddrinfo\n    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\nsocket.gaierror: [Errno -2] Name or service not known\n
\n

I can't find a solution. Any ideas?\nThanks in advance

\n

After @hardillb's help, I made the following changes:

\n
# Configure network encryption and authentication options. Enables SSL/TLS support.\nclient.tls_set()\n\n# Set userid and password\nclient.username_pw_set(login["token"])\n\n# Set websocket connection options.\nclient.ws_set_options(path="/ws", headers=None)\n\n# Connect\nclient.connect('psmart-test-api.aegean.gr', 443, 60)\n
\n

The code is running now... I get "still waiting for connection" but that's another issue....

\n", "Title": "Connect to Mqtt over websockets", "Tags": "|mqtt|raspberry-pi|web-sockets|python|", "Answer": "

The client.connect() function takes a hostname, not a URL, as its first argument.

\n

The failure is because the client is trying to resolve wss://psmart-test-api.aegean.gr/ws/ as a hostname.

\n

You should just be passing psmart-test-api.aegean.gr as the first argument of the function.

\n

To change the path section of the connection URL, you will need to use ws_set_options(path="/mqtt", headers=None). See the docs here
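As an illustration (not part of the original answer), the standard library can split the WebSocket URL into exactly the pieces those two calls need:

```python
from urllib.parse import urlparse

# Split the full WebSocket URL into hostname, port, and path components.
url = urlparse("wss://psmart-test-api.aegean.gr/ws/")
host = url.hostname      # "psmart-test-api.aegean.gr" -> client.connect(host, 443, 60)
port = url.port or 443   # wss uses 443 unless the URL says otherwise
path = url.path          # "/ws/" -> client.ws_set_options(path=path)
```

This avoids hard-coding: pass `host` (never the full URL) to connect() and `path` to ws_set_options().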

\n" }, { "Id": "5550", "CreationDate": "2021-02-17T13:47:17.007", "Body": "

I have 10,000 machines (to make jute-bags), and each machine needs to send data to a remote server every 2 seconds. The data consists of machine id, machine state and various other information collected from different sensors.

\n

I have successfully integrated the ESP8266 WiFi module with the machine. It collects all necessary data and transmits it to the router, which then uses the internet to send the data to my remote server.\nAs a demonstration to the factory management, I successfully sent data from 10 machines every 2 seconds.

\n

But now, I have to do the same task for 10,000 machines.

\n

I have absolutely no idea how much bandwidth I need, or how many machines I can connect to a single router.

\n

If my question seems incomplete, please feel free to ask me more questions.

\n", "Title": "How many machines do I connect to a single WiFi router?", "Tags": "|esp8266|wifi|routers|", "Answer": "

We're lacking quite a bit of data, but let's run a few numbers.

\n\n

Also consider that those 10K devices at 1 message every 2 seconds mean your server receives 5K messages per second. If the server is just forwarding them to another connection, fine. If you are storing them in a database, we're talking thousands of IOPS. Far from impossible with decent SSD-backed storage, but not on your run-of-the-mill single HDD, of course.

\n

Also remember that if that data comes to about 1 Kbyte on disk per message (very possible if you include indexes, etc., possibly much more depending on your DB schema), you're writing 10K * 1024 * 86400 / 2 = 442 Gigabytes per day!
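Those back-of-the-envelope numbers are easy to reproduce (the 1 KB/message figure is the assumption discussed above):

```python
# Back-of-the-envelope load numbers for 10K machines reporting every 2 s.
devices = 10_000
interval_s = 2
bytes_per_msg = 1024  # assumption: ~1 KB on disk per message, indexes included

msgs_per_second = devices // interval_s             # 5,000 messages per second
bytes_per_day = devices * bytes_per_msg * 86_400 // interval_s
gb_per_day = bytes_per_day / 1e9                    # ~442 GB per day
```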

\n

But let's consider the upstream and server parts are sorted.

\n

You'll never run 10K devices on a single AP. It's just not possible. Most "consumer" APs will choke beyond a few dozen devices. The best ones will stop at a few hundred in the best of conditions (using multiple radios on multiple bands). That's just for maintaining state, keys, and so on, we're not even talking about the devices communicating yet.

\n

The ESP8266 operates only in the 2.4 GHz band. That band is quite small and also quite busy. We'll consider the case of "most countries" (the US has more restrictions). You only have 3 non-overlapping channels in 802.11b or 4 in 802.11g/n at 20 MHz.

\n

I don't think you'll be able to have more than about 100 devices in 2.4 GHz on a single AP. So we're already reaching 100 APs!

\n

But with only 2 to 4 non-overlapping channels, the risk of interference between neighbouring devices and APs is probably too great for things to run reliably.

\n

Remember that the speeds quoted for Wi-Fi (72 Mbit/s max for the ESP8266) are just raw stream data rates. They do not take into account overhead, preamble, ACKs, RTS/CTS frames, guard intervals, inter frame space, collisions, retransmits... When sending very short frames, the overhead can quickly use a vast majority of the airtime, so you can't compare 72 Mbit/s with the 40 Mbit/s quoted above.

\n

Remember as well you'll have (at least) TCP ACKs, and 802.11 is half-duplex.

\n

At this scale, I'm not sure what the right solution is. My first instinct would go towards Ethernet (and possibly PoE), but this is a massive setup (that's quite a few switches, and quite a lot of cables).

\n

A probably better option is to have a single ESP8266 with sensors for multiple machines to reduce the number of devices, but I have no idea if that's possible in your case.

\n" }, { "Id": "5556", "CreationDate": "2021-02-24T03:44:43.893", "Body": "

Problem

\n

I have an Amcrest IP camera set up which records to Synology Surveillance Station (SS) when triggered by motion detection, with motion tracking enabled. I've set up my regular recording profile (record only when motion is detected) and Home Mode (no recording at all).

\n

What I would like to achieve is that when I\u2019m in Home Mode, not only does the camera not record, but it also doesn\u2019t track any motion (i.e. pan or tilt) at all. Ideally, it would be even better if I could tell at a glance whether the camera was currently active or not (mostly for the benefit of house guests).

\n

As a last resort I might buy a smart plug and use IFTTT to trigger it to shut the camera's power off, but I would prefer to avoid adding another device to the mix.

\n

What I've tried

\n

I\u2019ve set up Action Rules in SS where the trigger is entering (or leaving) Home Mode, and the action is to disable (or enable) the camera. When I enter Home Mode I can see that this has the effect of disabling the camera, but this applies on the SS side only - so the camera still actively detects and tracks motion.

\n

Is there a way that I can configure the Action Rule to disable the camera itself (either by disabling motion tracking, or powering it off)?

\n", "Title": "How to stop Amcrest camera from tracking motion while Surveillance Station is in Home Mode", "Tags": "|surveillance-cameras|", "Answer": "

I\u2019ve pieced together the solution, which is fairly straightforward, just poorly documented.

\n

Firstly, I discovered Amcrest's Privacy Mode which is exactly the option that I want to toggle using my Action Rule.

\n

Amcrest has a partially documented API but this doesn\u2019t mention Privacy Mode.

\n

However, the missing piece to the puzzle is this Amcrest user who found the right syntax by trial and error:

\n
http://youramcrestip/cgi-bin/configManager.cgi?action=getConfig&name=LeLensMask\nhttp://youramcrestip/cgi-bin/configManager.cgi?action=setConfig&LeLensMask[0].Enable=true\nhttp://youramcrestip/cgi-bin/configManager.cgi?action=setConfig&LeLensMask[0].Enable=false\n
\n

Lastly, you can call this using the Webhook option as the Action in the SS Action Rule:

\n

(Screenshot: Webhook action configured in the SS Action Rule)
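For scripting this outside of SS, the same endpoints can be hit programmatically. The sketch below only builds the URLs from the answer above; note that a real call against Amcrest's CGI API would normally also need credentials (typically HTTP digest auth, e.g. `requests` with `HTTPDigestAuth`), which is an assumption not verified against every firmware:

```python
def lens_mask_url(camera_ip, enable=None):
    """Build the Amcrest LeLensMask (Privacy Mode) CGI URL.

    enable=None returns the getConfig URL; True/False build the setConfig URLs.
    """
    base = f"http://{camera_ip}/cgi-bin/configManager.cgi?action="
    if enable is None:
        return base + "getConfig&name=LeLensMask"
    return base + f"setConfig&LeLensMask[0].Enable={'true' if enable else 'false'}"
```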

\n" }, { "Id": "5563", "CreationDate": "2021-02-25T14:41:39.867", "Body": "

I wish to use TASMOTA to replace my code so that, when triggered (by a button or sensor), the output operates not as a toggle but starts a timeout; for example, an IR sensor input triggers ON for 15 minutes.

\n

Is it possible?

\n", "Title": "Using TASMOTA for timeout operations", "Tags": "|tasmota|", "Answer": "

Answered in Tasmota's GitHub. Generally, this is done with PulseTime. (Note that PulseTime values above 111 mean "value minus 100" seconds, so a 15-minute timeout would be PulseTime 1000, i.e. 900 seconds.)

\n" }, { "Id": "5576", "CreationDate": "2021-03-03T06:16:59.680", "Body": "

I want to distribute an IoT product commercially. The product is powered by a LiPo battery (3.7V nominal) which is charged by a solar panel (6V, 2W).\nMy question is: can I ship the product internationally with the LiPo battery included? Are there any regulations against it? What is the usual practice in a similar case?\nDo the same regulations apply to LiFePO4 batteries too?

\n", "Title": "Can I ship the product (internationally) with LiPo battery included?", "Tags": "|batteries|", "Answer": "

Yes, you can: many electronic devices are shipped internationally with batteries included, and you can too, but only with the proper safety measures in place. If you have done that, you are free to grow your business; otherwise, a specialist carrier can ship your battery internationally for you, for example: https://batteryshipping.cc/how-to-ship-lithium-battery/?gclid=CjwKCAiAp4KCBhB6EiwAxRxbpNrD85WQahUOLKAwmY6aSeHGSkdeDNl47_JduF41f0Z2PS9QrE1e9xoCB6EQAvD_BwE

\n" }, { "Id": "5581", "CreationDate": "2021-03-04T08:51:40.333", "Body": "

I would like to know whether there are any restrictions/rules when developing a product which is compatible with Google/Alexa or maybe even other hubs. Do I need to get some license or follow some rules?

\n", "Title": "Developing a product for commercial use compatible with Google Home/Alexa", "Tags": "|smart-home|alexa|google-home|", "Answer": "

Google's documentation includes a checklist which covers the ToS:

\n

https://developers.google.com/assistant/smarthome/develop/launching

\n

Especially read the first item on the list on that page.

\n

For Amazon you should probably start here:

\n

https://developer.amazon.com/en-US/alexa/devices/connected-devices/business-resources

\n

And look at the "Certification and Badging" section

\n" }, { "Id": "5589", "CreationDate": "2021-03-06T16:45:36.117", "Body": "

I'm looking to dive into home automation starting from a Google Nest Mini device.
\nI understand that there has been a Home Mini and then the second iteration was rebranded as Nest. But a retailer site recommended by Google India has two generations of Google Nest Minis with two different prices - Nest Mini and Nest Mini (2nd Gen). Also, they do list Home Mini separately.
\nI cannot figure out the difference between Nest Mini and Nest Mini 2nd gen. Every article/video I read/watch compares Nest with Home without suggesting that there's a 2nd Gen Nest Mini.

\n

If there is one, I'd love to be directed to a link where I can understand the difference. I'm very confused.
\nTIA!

\n", "Title": "Is there a difference between Google Nest Mini and Google Nest Mini 2nd gen?", "Tags": "|smart-home|google-home|", "Answer": "

According to Wikipedia, the rebranded one is the 2nd-generation one. So there's just the Home Mini (1st gen) and the Nest Mini (2nd gen), and nothing in between. I also know from my colleagues at work, who integrate with these products, that there are only those two types of Minis. That page also lists the differences between the two. It seems your retailer was a bit lazy in putting the correct names on these products.

\n" }, { "Id": "5595", "CreationDate": "2021-03-09T15:25:13.833", "Body": "

I am designing an embedded Linux device that acts as a gateway for sensor data. In some cases this device will use a back-haul that I don't control (e.g. customer supplied cell or cable modem). I assume this device will be behind some sort of NAT and that network parameters will be assigned via DHCP in most installations.

\n

For typical communications, the device will open up sockets with a REST web service.

\n

I would like to add some secure diagnostic, command, and control capabilities. What I want is for each gateway to be running something like an SSH server. That way I could have a very flexible interface that a human can use on demand to communicate with a misbehaving gateway and troubleshoot or fix what is going wrong. This presents some obvious security issues, but I am more interested in talking about the networking hurdles I need to overcome at this point:

\n
    \n
  1. My customers would need to open a port on their router and forward it to my device, or my device would need to be directly connected to the internet and not behind NAT.
  2. \n
  3. If my device is behind NAT, it will probably need a static local IP
  4. \n
  5. The customer will need a static IP for their network, or we'll need to capture dynamic IPs through our REST interface.
  6. \n
\n

How can I hole punch SSH?

\n
    \n
  1. Can I make each gateway act like an SSH client and open a session with one of our servers? Is there some way to transfer or take over the SSH session?
  2. \n
  3. Can I tunnel the data using an IP-in-IP connection or similar?
  4. \n
\n", "Title": "How to use SSH server on a remote device and overcome routing issues", "Tags": "|networking|protocols|linux|", "Answer": "

Rather than describing how to cover all your wishes, I'll act as if there's one precise question. (All your wishes may be covered in the article I recently found here.)

\n

The precise question I'll answer is:\nI have a linux gateway that gathers sensor data.
\nI have shell access to that gateway logged in as "gatewayuser"
\nI want to be able to access that gateway by ssh from "somewhere on the internet"
\nHow do I do that?

\n

OK, now we're working from some fixed criteria. You're right, SSH and tunneling are the answer. You probably can't or don't want to expose your sensor gateway as a fixed place on the internet. (At a minimum, your router will probably make it difficult.)

\n

So, you want to use SSH reverse forwarding
\nssh -R

\n

If you want your sensor gateway to be able to continually try to establish this connection, you'll want the other end to be a static ip address or at least unchanging host name.

\n

Say you use some cheap VPS or developer-tier (free) Amazon AWS (or other) instance.\nSay that this public host is assigned the IP address 111.222.33.44 and you created an account 'clouduser' Say this host has port 2222 available for use.

\n

From your linux gateway, you would enter the command

\n
sudo ssh -R 2222:localhost:22 clouduser@111.222.33.44\n
\n

Technically you will log into 111.222.33.44, and if you're going to run this in a crontab, that's not necessarily what you want.

\n

Now, from that cloud server 111.222.33.44, assuming you've ssh'ed into it as clouduser, you can then run

\n
ssh -p 2222 localhost -l gatewayuser\n
\n

There are ways to secure this better (bind addresses come to mind) and make it so you don't have to log into the gateway first (crontab comes to mind). I found that blog article did a pretty good job, so I'm going to conclude my simple "do this one thing" reply and refer you off to those resources.
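To keep the reverse tunnel up unattended (an alternative to the crontab approach mentioned above), one rough sketch is to restart the ssh command in a supervising loop. The host, user, and ports are the example values from above; `-N`, `ServerAliveInterval`, and `ExitOnForwardFailure` are standard OpenSSH options, and key-based (passwordless) auth is assumed:

```python
import subprocess
import time

def reverse_tunnel_cmd(remote_port=2222, local_port=22,
                       user="clouduser", host="111.222.33.44"):
    """Build the `ssh -R` command from the example above (-N: no remote shell)."""
    return ["ssh", "-N",
            "-o", "ServerAliveInterval=30",    # detect dead connections quickly
            "-o", "ExitOnForwardFailure=yes",  # exit if the remote port is taken
            "-R", f"{remote_port}:localhost:{local_port}",
            f"{user}@{host}"]

def keep_tunnel_open():
    """Restart the tunnel whenever it dies."""
    while True:
        subprocess.call(reverse_tunnel_cmd())
        time.sleep(10)  # brief backoff before reconnecting
```

In practice, `autossh` exists to do exactly this restart-on-failure supervision more robustly.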

\n" }, { "Id": "5630", "CreationDate": "2021-03-26T22:05:51.757", "Body": "

Context: I am sending temp/humidity data from an NCD.io sensor through a gateway device to Azure, where it is being stored in the IoT Hub, somewhere.

\n

(Screenshot)

\n

Problem: Where is this data? I can access it through the device twin & also see it being sent via the Azure CLI, but the data logged in the IoT Hub only conveys information about the gateway message itself, not the actual data. The screenshot is a chart of messages received to date; I can go a step deeper and glean insights about the messages, but nothing about the sensor data.

\n

(Screenshot: chart of messages received to date)

\n

Can this information only be accessed via another service? Is there no way to view data in the Hub?

\n

Looking for clarification, tips/tricks. Thanks!!

\n", "Title": "Where is my sensor data being stored in the IoT Hub?", "Tags": "|azure|", "Answer": "

Azure IoT Hub doesn't act as a telemetry database for querying the data. It does store the data for the time defined in the retention policy, but you don't have direct access to it from the portal.\nTo get the data you can do one of the following:

\n
    \n
  1. Create a Message Route on the device telemetry messages that will route them to external storage. https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-d2c
  2. \n
  3. Create an Azure Function with IoT Hub trigger that will get the messages and then you can programmatically route them. https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-iot-trigger?tabs=csharp
  4. \n
  5. Use tools like Azure IoT Explorer - https://docs.microsoft.com/en-us/azure/iot-pnp/howto-use-iot-explorer
  6. \n
  7. Use the EventProcessorHost library to get the data https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-event-processor-host
  8. \n
\n" }, { "Id": "5636", "CreationDate": "2021-04-01T14:07:07.797", "Body": "

Here's my setup:

\n

My Internet at home comes from a 4G modem, because that's the best deal in my area in terms of price/speed. However, my speed is reduced to 3 Mbps after using 150 GB per month, but that's usually fine as I usually stay below 150. It does happen that I have to endure slow speeds for a few days during the last days of the month, though.

\n

I use a Google Nest Wifi with one router connected by Ethernet to the 4g modem, and one more Google wifi (also acts as speaker) to enhance the signal.

\n

I also have 6 Google speakers in various rooms throughout the apartment (3 Nest Audios, 1 Home Max, 1 Home Mini, and 1 original Home.)

\n

I have made those 7 speakers into one group, and quite often I use Spotify to play on the entire group so the music plays in sync throughout the apartment.

\n

Now, my question is: Will this consume more Internet data than simply casting to one speaker? For example, let's say I stream an album that would usually require 100 MB of data. I would assume that when this plays on all 7 speakers, this is only transferred once from the Internet, then transmitted on my local network to all the speakers.

\n

Is that true, though? After all, most people's home Wi-Fi data plans are unlimited, so it's not entirely unthinkable to me that Home was designed so that each speaker streams the music from the Internet itself. I see in the Google Wifi app that all speakers seem to have consumed considerable amounts of data during a month, so this makes me wonder.

\n

So while I assume the former, I would like to ask whether someone understands the inner workings of the Home/Nest system well enough to give me a concrete answer here.

\n", "Title": "Will casting music to a Google Home speaker group consume more Internet data than casting to a single speaker?", "Tags": "|google-home|audio|", "Answer": "

I looked around for an answer, and while it seems unlikely that you will get an authoritative answer without doing the experiment yourself, there is this reddit thread which indicates that the music is only streamed to one of the devices (the master), which then streams to the other devices.

\n

It is debated whether a device will always take the stream from the master or simply from whichever device will give it the best quality (which in most cases would be the master), but it seems pretty unanimous that the music is only streamed once, meaning that your 6 speakers will not consume 6 times more data than 1 speaker.

\n

There is one thread on the Google Nest help forum which asks for this information, but unfortunately the answer is notoriously unclear... the community specialist says,

\n
\n

The only way to stream music on Google Home devices at the same time is by playing it on a group of speakers and that is consider [sic] as one streaming.

\n
\n

Grammar aside, it sounds as though it is a single stream being broadcast to the group, but it's not clear.

\n" }, { "Id": "5647", "CreationDate": "2021-04-06T03:08:46.697", "Body": "

I read here:

\n
\n

Network bandwidth is the capacity of a network communications link to transmit the maximum volume of data from one point to another over a computer network or Internet connection in a given amount of time, usually one second. Bandwidth has the same meaning of capacity, and defines the data transfer rate.

\n
\n

Does this mean that, generally, the higher the bandwidth, the more likely it is that your router can withstand a DDoS?

\n

And what is a typical bandwidth (in Gbps) that most routers have? If someone stresses my router with a 1 Gbit/s attack, how much bandwidth (or whatever it is that an attack's effectiveness depends on) would I need to withstand it with zero issues? If it doesn't completely overwhelm my router to the point that it's useless, would it give me lag or something?

\n

NOTE: if there\u2019s a better SE I could ask this on, please redirect me to it

\n", "Title": "Does a DDoS attack\u2019s effectiveness depend on your bandwidth?", "Tags": "|ip-address|", "Answer": "

There are many forms of (distributed) denial of service attacks. While some are based purely on bandwidth, many will impact the service well before any link is saturated.

\n

If any request sent to your server(s) takes 100 ms of CPU time to execute, for instance, each CPU core will be able to process at most 10 such requests per second. So even if you have 10 servers with 8 cores each, 800 requests per second will saturate your infrastructure. Even counting 10 KB per request, that's just about 64 Mbit/s: even with Gbit/s links everywhere, your service will still be unavailable or very slow.

\n
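The arithmetic above, written out (using 1 KB = 1000 bytes, which is why it comes to "just about" 64 Mbit/s):

```python
servers, cores_per_server = 10, 8
cpu_ms_per_request = 100

requests_per_core = 1000 / cpu_ms_per_request             # 10 req/s per core
capacity = servers * cores_per_server * requests_per_core  # 800 req/s total

bytes_per_request = 10 * 1000                              # ~10 KB per request
mbit_per_s = capacity * bytes_per_request * 8 / 1e6        # 64.0 Mbit/s

print(capacity, mbit_per_s)
```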

Others may saturate your servers by just keeping connections open. They don\u2019t even need to use any CPU, just keep the connection open as long as possible. Depending on the server software and settings, this may bring your service to a halt quite quickly.

\n

So it all really depends on the type of attack and the servers and their setup.

\n" }, { "Id": "5649", "CreationDate": "2021-04-06T20:59:55.100", "Body": "

I'm about to pull the trigger on buying a Nest Wifi https://store.google.com/ca/product/nest_wifi.

\n

So before I spend $200 on that... how can I verify that my current modem can handle the Nest Wifi?

\n

Are the cables from the modem to the router universal?

\n", "Title": "Nest Wifi - What to look for to make sure your current modem is compatible with a Nest Wifi device?", "Tags": "|wifi|nest|", "Answer": "

If your cable modem has an open Ethernet port, you will be good to go. The Nest Wifi becomes the router and mesh network, so if you have a modem/router combo you may need to put it in bridge mode.

\n" }, { "Id": "5667", "CreationDate": "2021-04-19T12:41:04.243", "Body": "

I'm making a graduation project where I need to connect a smart plug to a web server through WAN (obviously the smart plug and the server aren't on the same network).

\n

In other words, I want to be able to turn the smart plug on/off through my website and have the smart plug send power readings to my server's API.

\n

Most smart plugs out there connect to their manufacturer's API. I want to buy a smart plug that I can configure/program to connect to my own API.

\n

Are there any smart plugs out there that support that?

\n

I would appreciate any more suggestions or information because I'm kinda lost.

\n

EDIT: I did extensive Google searching; most methods out there are:

\n
    \n
  1. Reverse engineering the API between the Android app and the manufacturer's server and calling the manufacturer's server to control the smart plug, or using IFTTT, which also calls the manufacturer's server.

    \n
  2. \n
  3. Creating a script on the local network that receives commands from the WAN and sends commands to the smart plug through its LAN API, but obviously this is not what I need, as it requires running the script on a device (e.g. a Raspberry Pi) 24 hours a day.

    \n
  4. \n
  5. There are also UPnP and port forwarding methods that allow you to use the local network API over the WAN, but that's also not an option, for security reasons.

    \n
  6. \n
\n

I need a smart plug that has power monitoring capabilities and that I can make connect to my own server.

\n
    \n
  1. I read about flashing custom firmware (Tasmota) on a smart plug (Sonoff) to make it connect to an MQTT broker over the WAN. I think this satisfies what I need: I can install an MQTT broker on my server and make the smart plug connect to it, but this method is kinda hacky and I don't want to risk it. I also don't know whether it supports power monitoring or not.
  2. \n
\n

There's a smart plug called MOKO SMART MK112 that supports MQTT and power monitoring out of the box, but the problem is that I couldn't get it from China due to COVID-19.

\n

If you know any good resources for this, or any smart plug that supports MQTT as well as power monitoring, or have any other suggestions, I would really appreciate it.

\n", "Title": "Connect a smartplug to a custom web API", "Tags": "|smart-plugs|", "Answer": "

Thank you for helping; I've found what I needed, and that is the Shelly Smart Plug.

\n

The Shelly Smart Plug supports both a documented Cloud API (through Shelly's servers), so you don't need to intercept or reverse engineer anything,
\nas well as the MQTT protocol.
\nYou will need to enable MQTT first through its web interface: enter your MQTT broker's IP and port and you're good to go.

\n

It uses an ESP8266 chip running Mongoose OS. You can also flash Tasmota on it over the air, without soldering.

\n

It has three ways of communicating:
\n1. Local HTTP API
\n2. MQTT
\n3. Cloud API

\n

Here's my plan:
\nFirst I will press the power button on the Shelly smart plug;
\nit will open a Wi-Fi network with a local HTTP server that you can use to configure it.
\nThrough my custom Android app, I will call that server's API, pass my Wi-Fi credentials to it, enable MQTT, add my MQTT server's IP and port to it, and that's it!

\n

I can then use its local HTTP server to communicate with it over the LAN, or use MQTT to communicate with it over the WAN.
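For reference, the Gen1 Shelly MQTT topics for a plug follow a simple pattern (check the linked MQTT docs for your firmware version; the device id below is hypothetical). A small helper to build them:

```python
def shelly_topics(device_id: str) -> dict:
    # Gen1 Shelly devices publish/subscribe under "shellies/<device_id>/..."
    base = f"shellies/{device_id}/relay/0"
    return {
        "state":   base,               # plug reports "on"/"off" here
        "command": f"{base}/command",  # publish "on"/"off" here to switch it
        "power":   f"{base}/power",    # instantaneous power draw in watts
    }

print(shelly_topics("shellyplug-s-ABC123")["command"])
```

Subscribing to the `power` topic from any MQTT client is then enough to get the power readings into your server.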

\n

https://shelly.cloud/products/shelly-plug-smart-home-automation-device/

\n

Local API: https://shelly-api-docs.shelly.cloud/_review/mqtt/#shelly-plug
\nhttps://shelly-api-docs.shelly.cloud/#shelly-plug-plugs

\n

Cloud API: https://shelly.cloud/documents/developers/shelly_cloud_api_access.pdf

\n

Tasmota profile: https://templates.blakadder.com/shelly_plug_S.html

\n

Flashing: https://youtu.be/_TSJB_IzxG0

\n" }, { "Id": "5705", "CreationDate": "2021-05-06T20:52:41.393", "Body": "

I have an LTE modem connected to my mcu. It's a simcom7600 modem.

\n

I am trying to find the IP and MAC address of the modem (I need them so that I can hook up lwIP and start communicating with the outside world).

\n

I achieved a few things already:

\n\n

However, getting the IP address and MAC address seems to be a more daunting task.

\n

What I have tried is :

\n
AT+CGPADDR=1\nSending command: AT+CGPADDR=1\n[CR][NL]+CGPADDR: 1,xxx.xxx.xxx.xxx[CR][NL][CR][NL]OK[CR][NL]\n
\n

xxx.xxx.xxx.xxx stands in for a real IP address in the actual output; I think it's best not to share that online ;-)

\n

Could anybody confirm this is the IP address which is visible to the outside world? When I try to ping or traceroute it, it seems to be unreachable.

\n

For MAC address I found this:

\n

\"enter

\n

But when I try that, it always errors out.

\n
AT+CWMACADDR?\nSending command: AT+CWMACADDR?\n[CR][NL]ERROR[CR][NL]\n
\n

For what it's worth: I CAN ping e.g. Google from the modem, so I know I am connected.

\n
AT+CPING="www.google.com",1\nSending command: AT+CPING="www.google.com",1\n[CR][NL]OK[CR][NL]\n[CR][NL]+CPING: 1,216.58.211.100,64,299,255[CR][NL]\n[CR][NL]+CPING: 1,216.58.211.100,64,288,255[CR][NL]\n[CR][NL]+CPING: 1,216.58.211.100,64,287,255[CR][NL]\n[CR][NL]+CPING: 1,216.58.211.100,64,277,255[CR][NL]\n[CR][NL]+CPING: 3,4,4,0,277,299,287[CR][NL]\n
\n

Who can confirm that the approach to obtaining my IP address is correct?\nWho can give me some guidance on how to get the modem's MAC address?

\n

Many thx in advance!!

\n


\n", "Title": "Getting IP and MAC address of LTE modem using AT commands", "Tags": "|communication|", "Answer": "

As mentioned in the comments

\n

An LTE radio is unlikely to have a MAC address that is of any use to you (it will have analogous IDs on the cellular network, e.g. the IMEI). MAC addresses in lwIP's context are normally for use with Ethernet networks. I suggest you probably want to use the lwIP PPP mode. Docs here

\n

As for the IP address: any IP address available to the LTE modem is most likely going to be from a private IP address range and sit behind a CGNAT gateway, as IPv4 addresses are in increasingly short supply and you will pay a premium for a fully routable address. If you want a publicly accessible device, you should really be looking at IPv6 support these days.
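As a quick check of what kind of address `AT+CGPADDR` returned, you can pull it out of the response and test it against the private ranges and the CGNAT shared range (RFC 6598, 100.64.0.0/10). A sketch with Python's standard library; the response line format is the one shown in the question, with a made-up address:

```python
import ipaddress
import re

CGNAT = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

def classify(response: str) -> str:
    # Extract the address from a "+CGPADDR: <cid>,<ip>" response line
    match = re.search(r"\+CGPADDR: \d+,([\d.]+)", response)
    addr = ipaddress.ip_address(match.group(1))
    if addr in CGNAT:
        return "carrier-grade NAT (not reachable from the internet)"
    if addr.is_private:
        return "private (not reachable from the internet)"
    return "public"

print(classify("+CGPADDR: 1,100.72.10.1"))
```

If the address falls in 100.64.0.0/10 or a private range, that explains why pinging it from outside fails.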

\n" }, { "Id": "5708", "CreationDate": "2021-05-09T17:52:53.147", "Body": "

This is probably asked several times before, but my websearch skills can't really direct me to an adequate answer.
\nI have a thingy that takes commands from a mobile application using a request-response protocol.
\nFrom time to time it also sends notifications to the mobile app.
\nUntil now I used RFCOMM, but iOS doesn't really seem to support it, so I have to switch the underlying implementation. So far I've found that the GATT profile might be what I need, but I'm not sure it's the right direction, or whether it's achievable at all.
\nSo what profile/setup should I dig into to implement this communication pattern?

\n", "Title": "BLE client server communication", "Tags": "|bluetooth|bluetooth-low-energy|", "Answer": "

The exact setup depends on what you want or need to do, but the basic scheme is that:

\n\n

This is a scheme that will work with all BLE devices with an appropriate app running on them.

\n

Sometimes you can just advertise and not bother with the service and characteristics. Sometimes you use pre-defined services and characteristics. And there are probably other combinations.

\n" }, { "Id": "5711", "CreationDate": "2021-05-11T13:48:22.410", "Body": "

I am having a difficult time figuring out how to prototype in the IoT realm. This mainly stems from the accessibility of components and modules, and their size. Here is what I have done so far.

\n\n

So my problem is that the Quectel EP06 is too big, and I don't know my next step after this POC prototype. Essentially I need to fit the final solution into less than 18 mm by 18 mm. All the SoC needs is low-power LTE and global positioning (GNSS).

\n

I've searched online and the only chip that looks like it will work is the Qualcomm 920x series.

\n

Does anyone know the process to prototype with a chip like this? Am I going down the right path here? I am relatively new to prototyping and very new to working with SoC-type chips.

\n

Thanks

\n

Adam

\n", "Title": "IOT prototyping", "Tags": "|hardware|", "Answer": "

There are different possible stages, though the details will vary a lot on what you are trying to achieve, what is on the market, your technical and financial means, expected volume, etc.

\n\n

Note that the minimum quantities involved usually increase along the way: dev boards are sold by the unit, modules sometimes require slightly higher quantities, the chips themselves may have very high minimum order quantities (MOQ). 100K pieces to order directly from the manufacturer is common, and for some chips it's the only way to get them (others you can get for any quantity, but the price may be quite different depending on the quantity).

\n" }, { "Id": "5716", "CreationDate": "2021-05-14T16:50:38.477", "Body": "

Goal:

\n

To build a gateway using a Raspberry Pi 4 which will receive the BLE data from the sensor, so I can see it remotely on my mobile from anywhere.

\n

Sensor:

\n

Temperature & Humidity BLE sensor (T201). You use an app called "SensorsPro" to receive the data from the sensor when within range, and you can sync the data to produce an up-to-date graph that logs the readings. The sensor itself stores roughly 20 days of data, which you can sync by opening the app.\nWhen you open the app, the data takes a few seconds to start flowing again with live temperature & humidity readings. The app can also connect to and read multiple sensors; each has a unique MAC address starting with [A4:C1:38:xx:xx:xx].

\n

Why:\nI use these to monitor my vivariums with creatures inside; each needs to stay within a temperature & humidity range, and I'd like to be able to monitor them remotely and receive alerts if the temperature spikes or the humidity drops, so I can activate the appropriate cooling fans or misting system, turn off the lights during spikes of hot weather, etc.

\n

What i need help with:

\n
    \n
  1. I have been able to put an old Galaxy S6 into dev mode, use the app from the manufacturer to receive data, log it via the btsnoop_hci.log file and export it to Wireshark. But I can't seem to translate this data; after searching for hours I'm struggling to work out how to convert it into a temperature and humidity reading plus the device battery level.

    \n
  2. \n
  3. The T201 sensor is always advertising, but connecting via a mobile BLE scanner or gatttool disconnects frequently without my being able to do much more. I need to find a way to mimic the app on the RPi and send the data to my Pi, which I could then convert (once I know what data I'm receiving) using Python (optimistically) and forward to Mosquitto / Node-RED / MQTT Dashboard to read the data, and then work out an IFTTT or Telegram notification to alert me if the temperature / humidity is out of range.

    \n
  4. \n
\n

Any help would be appreciated; I'm a beginner in Linux and Python, so I certainly have a challenge ahead.

\n

I looked into openHAB too, but without the data from the sensor I feel that's a no-go at present.

\n

Thank you in advance for any contributions towards my project.

\n

18/05/21:

\n

https://easyupload.io/kdc6gp - 30 seconds of BLE data

\n

https://streamable.com/bgqo5v - Video of the 30 seconds to see the temperature, humidity, battery level live against the 30 seconds to see how it changes.

\n

[A4:C1:38:3A:07:3A] - Top reading,\n[A4:C1:38:C0:01:E1] - Bottom reading

\n", "Title": "BLE Sensor Gateway & RaspPi Project", "Tags": "|raspberry-pi|bluetooth|", "Answer": "

The manufacturer advertising data looks like this:

\n
01 01 a4 c1 38 3a 07 3a 01 07 08 ce 25 89 62 00 01\n?? ?? ---MAC address--- ?? ?? Temp- -Hum- Ba ?? ??\n
\n\n

That leaves us 6 unknown bytes, but all the rest (which I believe includes all the info you need) is pretty clear.

\n

You would have to capture more traffic to see if those remaining values change.
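A sketch of parsing that layout with Python's struct module. The big-endian byte order and the /100 scaling for temperature and humidity are my assumptions, so verify the results against a live reading in the app:

```python
import struct

def parse_adv(payload: bytes) -> dict:
    # Layout from above: 2 unknown bytes, 6-byte MAC, 2 unknown bytes,
    # temperature (u16), humidity (u16), battery (u8), 2 unknown bytes.
    mac = payload[2:8]
    temp_raw, hum_raw = struct.unpack_from(">HH", payload, 10)  # big-endian: assumption
    return {
        "mac": ":".join(f"{b:02x}" for b in mac),
        "temperature_c": temp_raw / 100,  # /100 scaling: assumption, verify!
        "humidity_pct": hum_raw / 100,
        "battery_pct": payload[14],
    }

print(parse_adv(bytes.fromhex("0101a4c1383a073a010708ce2589620001")))
```

For the sample frame above this decodes to MAC a4:c1:38:3a:07:3a (the "top" sensor), 22.54 °C, 96.09 % humidity and 98 % battery.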

\n" }, { "Id": "5719", "CreationDate": "2021-05-16T20:47:21.847", "Body": "

What is the maximum number of AWS IoT things that can be concurrently connected to the IoT broker? Or, what is the maximum number of topics that can be created per AWS account?

\n

So, for example, say I have a product with an expected production run of more than 200,000 (two hundred thousand) units. Can I add those 200,000 things to the AWS account to keep track of all of these products, with all of them concurrently connected and subscribing/publishing to the broker? Would that be possible?

\n

I tried to find that info on AWS, but couldn't.

\n", "Title": "What is the maximum number of IOT things of topic per AWS account?", "Tags": "|aws-iot|aws|", "Answer": "

There is no known limit to the number of IoT things that can be concurrently connected. The broker obviously has enough IP addresses and ports to allow virtually unlimited incoming connections. It is also a cluster, so you won't bog down the rules engine, etc. AWS has good experience with large-scale systems. Besides, you'll be paying for each connection, though a small amount! I'm sure if you try hard enough there is a way to cause trouble in the rules engine and get slapped on the wrist/banned, but for someone who's just doing their work, that is unlikely.

\n

Topics are dynamic, i.e. until something subscribes to a topic or something publishes to it, the broker simply has no knowledge of it. You don't "create" a topic.

\n

Now, from a device perspective, there is a limit on how many subscriptions a connection can make. I think it was 50 (my knowledge is from some time ago; please check).\nAlso, from an account perspective, there are limits on the number of messages per second, connections per second, subscriptions per second, etc. Some of these are soft limits and some are hard limits.\nMany of these don't usually affect you unless some event causes a bunch of devices to reconnect at the same time (e.g. an AWS datacenter has a power glitch that causes a reboot of all the machines, or there's an outage at the cellular provider that gets fixed suddenly and all devices connect at the same time, or there's a power outage across Texas that gets fixed suddenly!).\nBut if you will have, say, 200k devices that come up and connect randomly between 7 and 9 AM and send a couple of messages every few seconds, you will not come close to any limits. Keep an eye on the costs, though.
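To put rough numbers on that last scenario (the 200k-device fleet from the question, with the connection pattern assumed above):

```python
devices = 200_000
connect_window_s = 2 * 3600   # devices connect randomly between 7 and 9 AM

avg_connects_per_s = devices / connect_window_s
print(round(avg_connects_per_s, 1))  # average connections per second
```

That works out to roughly 28 new connections per second on average, which is comfortably below the default account-level connect-rate quota AWS publishes (a few hundred per second at the time of writing; check the current service quotas page).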

\n" }, { "Id": "5731", "CreationDate": "2021-05-23T23:23:23.987", "Body": "

I bought a smart switch to control my room lamp. It is a JWCOM Smart SA-01 model with very little to no documentation available on the internet (at least I wasn't able to find anything useful). The problem is that whenever I connect the device to the Wi-Fi network, it either disconnects and keeps trying to reconnect, or it kicks my phone off the network. Since the only two devices connected to my Wi-Fi network are my phone and this switch, I think it might be consuming too much bandwidth, leaving the router no option but to kick one of them off.

\n

I contacted the seller about this issue and they didn't give a valid answer, saying the problem might be the Wi-Fi signal being too weak, which is not the case because the switch is literally less than a meter from the router.

\n

Since I don't know the cause, do you know a Linux command that lets me check the bandwidth usage on my Wi-Fi router? Has anyone run into a similar problem before and, if so, what did you do to fix it? Since my router doesn't provide bandwidth control by MAC address, is there a way to control this device's network usage from my computer somehow?

\n

Thanks in advance

\n", "Title": "\"Smart\" switch keeps disconnecting from wi-fi network", "Tags": "|networking|wifi|linux|", "Answer": "

The way I got it to work was to buy a separate router and connect only the switches to it. Sometimes one of them still disconnects, but it is better than having my phone kicked off the network all the time.

\n" }, { "Id": "5740", "CreationDate": "2021-05-29T07:32:37.927", "Body": "

In what scenarios is it possible for a collision to happen between two packets?\nAnd if a collision happens, is it possible to demodulate? How?\nRegarding here, it says there are two types of interference, intra- and inter-network, for unlicensed networks. I suspect that when a node from an outside network interferes with signals inside the network, it is recoverable. But how?\nThanks.

\n", "Title": "LoRaWAN collision and demodulation", "Tags": "|lorawan|", "Answer": "

I am trying to answer all your questions one after the other below.

\n
\n

In what scenario is it possible that collision happens between two packets.

\n
\n

If, at the location of the receiver, the signals of two or more LoRaWAN radio packets interfere such that the receiving device is not able to demodulate the packet addressed to it, we say that the packet was lost due to a collision.

\n
\n

And if a collision happens, is it possible to demodulate?

\n
\n

If the signals of two LoRaWAN radio packets interfere at the location of the receiver, it may still be able to demodulate the one that was addressed to it.

\n
\n

How?

\n
\n

A radio packet that collides with another radio packet at the receiver can still be demodulated with high likelihood under the following conditions:

\n\n

There is only little interference between neighbouring radio channels, and LoRaWAN modulations of different spreading factors are orthogonal.
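Those conditions can be sketched as a rough decision rule. The 6 dB co-channel capture margin below is an assumed, commonly quoted figure for LoRa's capture effect, not something taken from the LoRaWAN specification:

```python
def survives_collision(pkt, interferer, capture_margin_db=6.0):
    """Rough model: the wanted packet is likely demodulated if the interferer
    is on a different channel, uses a different (orthogonal) spreading factor,
    or is received enough dB weaker (capture effect)."""
    if pkt["channel_mhz"] != interferer["channel_mhz"]:
        return True
    if pkt["sf"] != interferer["sf"]:
        return True
    return pkt["rssi_dbm"] - interferer["rssi_dbm"] >= capture_margin_db

wanted = {"channel_mhz": 868.1, "sf": 7, "rssi_dbm": -95}
print(survives_collision(wanted, {"channel_mhz": 868.1, "sf": 7, "rssi_dbm": -97}))
```

A real receiver's behaviour also depends on timing overlap and coding, so treat this purely as an illustration of the conditions above.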

\n
\n

Regarding here it says there are two types of interference. Intra and internetwork for unlicensed networks. I suspect when a node from outside network interferes with signals inside network, it is recoverable. But how? Thanks.

\n
\n

The link you referred to (here) talks about inter-system/intra-system interference, not the inter-network/intra-network interference you mentioned in your question. It is important to make a distinction between the two scenarios.

\n\n" }, { "Id": "5744", "CreationDate": "2021-06-01T19:57:38.123", "Body": "

I want to use my PN532 as an input device, like a keyboard, on my PC with Windows 10. I have a CP210x USB-to-UART bridge.

\n

Is there a way to use my PN532 on a x86/64 device?

\n", "Title": "Is it possible to connect a PN532 with CP210 to Windows x86/64?", "Tags": "|rfid|microsoft-windows|", "Answer": "

If you want to use it "as a keyboard", i.e. whenever a card is presented some data from the card is sent to the PC as if it were typed on a keyboard, then I don't believe you can do that with this combination:

\n\n

If you really want an NFC reader acting as a keyboard, you'll probably need a device that has been designed for that (there are quite a few). If you want to do it yourself, you can probably use the PN532, but you'll need something a bit more intelligent to connect it to the PC: some kind of microcontroller with USB peripheral support which allows use of the HID profile, and a UART to talk to the PN532.

\n

Some Arduino boards as well as some of the Teensy boards should be able to do that.

\n

In any case, you\u2019ll need to determine exactly what you want to read from the tags and output as keystrokes. A classic case is the card\u2019s UID, but other applications may require reading different information from the card, possibly with encryption and whatnot.

\n

If you don't care about it actually acting as a keyboard, but just want to communicate with the PN532, then the combination should work out. Note that IIRC the PN532 has multiple interfaces (SPI, I2C, UART I believe), and some ready-made boards are preset to a specific interface. Others will require cutting or soldering some jumpers to select the right mode.

\n

Note that on PCs, the optimal use case is generally to have a reader which is compatible with the PC/SC protocol. This allows you to use all sorts of software which can work with any such reader.

\n" }, { "Id": "5762", "CreationDate": "2021-06-20T15:41:38.957", "Body": "

I'm stumped on this one. Everything appears to be correctly configured and working properly... except that Home Assistant can't discover any ZigBee devices. Adding the ConBee II appears to work just fine (recognized under the mapped address as below):

\n

\"Screenshot

\n

...but when I go to add a device, this spins until it times out after a few minutes:

\n

\"screenshot

\n

This is a fresh install of Raspbian, with the following docker-compose.yml (I've tried the commented-out options as well):

\n
version: '3'\nservices:\n  homeassistant:\n    container_name: homeassistant\n    # image: homeassistant/home-assistant:stable\n    image: homeassistant/raspberrypi4-homeassistant:stable\n    ports:\n      - "8123:8123"\n    volumes:\n      - ./config:/config\n      - /etc/localtime:/etc/localtime:ro\n    devices:\n      # - /dev/ttyACM0:/dev/ttyACM0\n      - /dev/ttyACM0\n    restart: unless-stopped\n    # network_mode: host\n    # privileged: true\n
\n

This is the debug output from Docker, written immediately when I click Add Device in the Home Assistant configuration for the ConBee:

\n
homeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.zigbee.application] Sending Zigbee broadcast with tsn 1 under 2 request id, data: b'013c00'\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Command Command.aps_data_request (17, 2, 0, <DeconzAddressEndpoint address_mode=1 address=65532 endpoint=None>, 0, 54, 0, b'\\x01<\\x00', 2, 0)\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Send: 0x12130018001100020001fcff00003600000300013c000200\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Frame received: 0x121300090002002202\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] APS data request response: [2, <DeviceState.APSDE_DATA_REQUEST_SLOTS_AVAILABLE|2: 34>, 2]\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Frame received: 0x0e14000700aa00\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Device state changed response: [<DeviceState.128|APSDE_DATA_REQUEST_SLOTS_AVAILABLE|APSDE_DATA_INDICATION|2: 170>, 0]\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Command Command.aps_data_indication (1, 1)\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Send: 0x1714000800010001\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Frame received: 0x17140021001a002201fcff0102000000000036000300013c0000afdd873601001e\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] APS data indication response: [26, <DeviceState.APSDE_DATA_REQUEST_SLOTS_AVAILABLE|2: 34>, <DeconzAddress address_mode=ADDRESS_MODE.GROUP address=0xfffc>, 1, <DeconzAddress address_mode=ADDRESS_MODE.NWK address=0x0000>, 0, 0, 54, b'\\x01<\\x00', 0, 175, 221, 135, 54, 1, 0, 30]\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy.zdo] [0x0000:zdo] ZDO request ZDOCmd.Mgmt_Permit_Joining_req: [60, 
<Bool.false: 0>]\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] 'aps_data_indication' response from <DeconzAddress address_mode=ADDRESS_MODE.NWK address=0x0000>, ep: 0, profile: 0x0000, cluster_id: 0x0036, data: b'013c00'\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Frame received: 0x0e15000700a600\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Device state changed response: [<DeviceState.128|APSDE_DATA_REQUEST_SLOTS_AVAILABLE|APSDE_DATA_CONFIRM|2: 166>, 0]\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Command Command.aps_data_confirm (0,)\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Send: 0x04150007000000\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Frame received: 0x04150012000b00220201fcff00e100000000\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] APS data confirm response for request with id 2: e1\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Request id: 0x02 'aps_data_confirm' for <DeconzAddressEndpoint address_mode=ADDRESS_MODE.GROUP address=0xfffc endpoint=None>, status: 0xe1\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.zigbee.application] Error while sending 2 req id broadcast: TXStatus.MAC_CHANNEL_ACCESS_FAILURE\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Command Command.write_parameter (2, <NetworkParameter.permit_join: 33>, b'<')\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Send: 0x0b160009000200213c\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.uart] Frame received: 0x0b16000800010021\nhomeassistant    | 2021-06-20 11:05:07 DEBUG (MainThread) [zigpy_deconz.api] Write parameter permit_join: SUCCESS\n
\n

Things I've tried:

\n
    \n
  1. Deleting the Integration in HomeAssistant and re-adding the device
  2. \n
  3. RMing the docker setup and starting fresh
  4. \n
  5. The commented out parameters in the docker-compose file
  6. \n
  7. Changing the Zigbee channel to 24 (via ./config/configuration.yaml)
  8. \n
  9. Pairing multiple different device types (I've tried both Leviton switches and Aqara door sensors, always in discoverable mode)
  10. \n
  11. Connecting the ConBee to a powered USB hub to rule out any USB3 or wifi interference (although the pi is a client on a 5GHz wifi network)
  12. \n
  13. Googling the MAC_CHANNEL_ACCESS_FAILURE error extensively, which appears to be a red herring
  14. \n
\n", "Title": "Can't Connect to ZigBee Devices (Raspberry Pi4, HomeAssistant, Docker-Compose, ConBeeII)", "Tags": "|raspberry-pi|zigbee|home-assistant|docker|", "Answer": "

USB3.0 interference is real! I had 2 fit-size USB 3.0 drives plugged directly into the pi USB3 ports, with the ConBee II plugged in next to them in the USB 2.0 port, and that completely prevented the ConBee from being able to pair with zigbee devices. What's more surprising is that moving the ConBee to a powered USB hub (with 3+ feet of distance from the Pi) didn't make a difference - I still couldn't pair with any Zigbee devices. Also note that this is NOT a power issue - I'm using the official CanaKit 3.5a power supply.

\n

As soon as I removed the two USB thumbdrives, the ConBee was able to pair with all 7 of my devices.

\n" }, { "Id": "5764", "CreationDate": "2021-06-22T12:37:11.490", "Body": "

I have been trying to connect my ESP32 board to my laptop via MQTT. I have installed the Mosquitto MQTT broker on my laptop, but my ESP32 fails to connect every time. This is the test code I am using to check the MQTT connection.

\n
#include <WiFi.h>\n#include <PubSubClient.h>\nconst char* ssid = "......";              //WiFi Name\nconst char* password = "......";      //WiFi Password\nconst char* server= "xxx.xxx.xx.xx";    //RPi or Machine IP on which the broker is\n\nWiFiClient espClient;\nPubSubClient client(espClient);\n\nint setup_WiFi(){\n  delay(10);\n  // We start by connecting to a WiFi network\n  Serial.println();\n  Serial.print("Connecting to ");\n  Serial.println(ssid);\n\n  WiFi.begin(ssid, password);\n\n  while (WiFi.status() != WL_CONNECTED) {\n    delay(500);\n    Serial.print(".");\n  }\n  Serial.println("");\n  Serial.println("WiFi connected");\n  Serial.println("IP address: ");\n  Serial.println(WiFi.localIP());\n  Serial.print("Attempting MQTT connection...");\n  client.connect("esp32");      \n  if (client.connect("esp32")){                 \n      Serial.println("connected");\n    }\n    else {\n      Serial.print("failed, rc=");\n      Serial.println(client.state());\n    }\n    return 0;\n}\n\nint reconnect() {\n  unsigned long startAttemptTime = millis();\n  while (!client.connected())  \n    {\n    Serial.println("Attempting MQTT connection...");\n    // Attempt to connect\n    if (client.connect("esp32")){\n      Serial.println("connected");\n    } \n    else {\n      Serial.print("failed, rc=");\n      Serial.println(client.state());\n      Serial.println(" try again in 5 seconds");\n      // Wait 5 seconds before retrying\n      delay(5000);\n    }\n  }\n  return 0;\n}\n\nint send_mqtt(){\n  setup_WiFi();\n  char sss[15]="Hello World";\n   if (!client.connected()){\n    reconnect();\n  } \n  client.publish("esp32/test", sss);    //send message\n  WiFi.disconnect(true);\n  Serial.println("Sent");\n  return 0;\n}\nvoid setup() {\n  Serial.begin(115200);\n  client.setServer(server, 1883);   //mqtt server details\n  setup_WiFi();\n  reconnect();\n    }\n\nvoid loop() {\n  send_mqtt();\n  delay(10000);     //Wait 10 secs before next transmission\n}\n
\n

But each time I get the error

\n
\n

failed, rc=-2

\n
\n

I have been trying to do this for the past 2 months or so with no success. Meanwhile, I have searched the internet extensively to see what it is that I am doing wrong. I have made sure that both devices are on the same LAN.

\n", "Title": "ESP32 MQTT error", "Tags": "|mqtt|esp32|", "Answer": "

Mosquitto only accepts local connections (from the same computer) if no custom config is used. Are you using a custom config?

\n

What OS do you have on the laptop?

\n

Under listener in the config example found here: mosquitto/mosquitto.conf.

\n

On row 216, add listener 1883 0.0.0.0 to allow connections from outside the computer.
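For reference, the relevant part of the config might look like this (a minimal sketch for LAN testing; note that Mosquitto 2.x also refuses remote connections unless an authentication option such as allow_anonymous or a password_file is set):

```
# Listen on all network interfaces instead of only localhost
listener 1883 0.0.0.0

# For quick LAN testing only; use password_file or TLS in production
allow_anonymous true
```

After editing, restart the broker so the new listener takes effect.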

\n" }, { "Id": "5778", "CreationDate": "2021-06-26T13:36:41.023", "Body": "

I would like to create a point-to-point connection between a LoRa temperature sensor and an M5 Stack (ESP32) with a LoRa module. However, I am a beginner with LoRa, so I have a few questions:

\n

Are all packets sent with LoRa (not LoRaWAN) encrypted? Or does it depend on the producer of the LoRa sensor?\nCan the content of the packet received by the M5 Stack be viewed? (If I understand correctly, with LoRaWAN the content can only be viewed after it is on the server.)\nCan I send measured temperatures from multiple LoRa sensors to one M5 Stack? If yes, how could I distinguish which sensor the packet has been sent from?

\n

Any help would be appreciated!

\n", "Title": "LoRa point-to-point communication", "Tags": "|esp32|lora|lorawan|", "Answer": "

Raw LoRa is just about sending raw data using the LoRa modulation. There's no encryption, there are no acks, no counters, no device identifiers, nothing. All of this happens in upper layers, usually the LoRaWAN layer.

\n

If you want encryption on a raw LoRa link, unless the providers of the LoRa modules have added their own, you'll have to do it yourself, with all the associated caveats (it's harder than you think, as was shown by WEP, though the levels of traffic on a LoRa link make some of the attacks on WEP unlikely).

\n
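To illustrate what "doing it yourself" involves: even with a correct cipher you still have to manage keys, nonces, replay protection and message integrity on your own. Below is a sketch of XTEA, a small block cipher often seen on constrained radio links, written in Python purely for illustration; in a real design you should use a vetted crypto library rather than hand-rolled code.

```python
DELTA = 0x9E3779B9   # XTEA key schedule constant
MASK = 0xFFFFFFFF    # keep arithmetic at 32 bits


def xtea_encrypt(block, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
    (four 32-bit words)."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        v0 = (v0 + (((((v1 << 4) & MASK) ^ (v1 >> 5)) + v1)
                    ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + (((((v0 << 4) & MASK) ^ (v0 >> 5)) + v0)
                    ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1


def xtea_decrypt(block, key, rounds=32):
    """Invert xtea_encrypt for the same key and round count."""
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - (((((v0 << 4) & MASK) ^ (v0 >> 5)) + v0)
                    ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - (((((v1 << 4) & MASK) ^ (v1 >> 5)) + v1)
                    ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

A real payload would be split into 64-bit blocks and run through a proper mode of operation (e.g. CTR) with an authentication tag, which this sketch deliberately omits.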

In LoRaWAN encryption indeed happens between the end device and the LNS (network server).

\n

If you use raw LoRa without encryption, you will most definitely be able to read any data sent by the end devices.

\n" }, { "Id": "5780", "CreationDate": "2021-06-27T18:23:14.323", "Body": "

I would like to measure temperatures using a LoRa device and establish a point-to-point connection with another LoRa compatible device. However, the only appropriate temperature sensors use LoRaWAN, which requires a network server, which I am trying to avoid. Is there a way to enable a LoRaWAN sensor device to establish a point-to-point connection and receive packets that don't have to go through a network server?

\n

Thanks in advance!

\n", "Title": "Sending LoRa packets using LoRaWAN sensor device", "Tags": "|lora|lorawan|", "Answer": "

There's an alternative, but it depends a lot on what the other "LoRa compatible device" is.

\n\n

Otherwise, as the others have pointed out, if you really want to use raw LoRa and no LNS at all, then you will need to modify the existing device. Depending on the device, the MCU, whether the firmware source is publicly available, etc., this may be very easy or very complex. Without any details about that device, it's very difficult to say more.

\n" }, { "Id": "5784", "CreationDate": "2021-06-29T13:11:27.473", "Body": "

I have two Raspberry Pis, one with the Mosquitto broker, and one acting as a client. The client has set up a LWT with the broker. After a power outage that affects both RPis, both RPis come back online, but the LWT is never sent. I expected it would be sent since, from the broker's point of view, it is no longer receiving pings from the client. I am wondering if this is because Mosquitto does not persist LWTs to disk (all my other retained messages are present)? If so, can I change Mosquitto to allow this?

\n

My config looks like:

\n
# Place your local configuration in /etc/mosquitto/conf.d/\n#\n# A full description of the configuration file is at\n# /usr/share/doc/mosquitto/examples/mosquitto.conf.example\n\npid_file /var/run/mosquitto.pid\n\npersistence true\npersistence_location /var/lib/mosquitto/\n\nlog_dest file /var/log/mosquitto/mosquitto.log\n\ninclude_dir /etc/mosquitto/conf.d\nautosave_interval 300\n
\n", "Title": "Does mosquitto broker persist LWT messages to disk, so they may be recovered between restarts?", "Tags": "|mqtt|mosquitto|", "Answer": "

LWTs are part of the session that mosquitto will persist to the db along with the rest of the session.

\n

But if both Pis fail due to a power cut, then the broker will never get the chance to send the LWT, as it goes offline at the same time as the client, so the keepalive timeout will never be reached.

\n

It will not start the clock for any disconnected clients when it comes back online as keepalive is only valid for currently connected clients and no clients will be connected as it restarts.

\n

So to answer the last question, no there is no way to get the behaviour you are looking for.

\n" }, { "Id": "5788", "CreationDate": "2021-07-02T15:53:27.807", "Body": "

Preface: I\u2019d like to clarify that I understand what a relay is and that a PLC uses a fairly conventional microprocessor that only digitally establishes logical logic gate configuration as a digitally programmable alternative to relay banks for analog and/or (depending on the PLC) digital signals. My question is based on the understanding that to date actual logic gates (as far as I know) aren\u2019t non-locally programmable (\u201cre-wirable\u201d) without a person manually rewiring truly programmable actual (not logical programming of a statically wired microprocessor) logic gates.

\n

Rectenna work interests me specifically around any potential relevance of varying transmission wavelengths and material resistances (if this is not possible with MoS2, generally as a concept for other potential materials) to making possible remote switch activation of logically\u00a0chosen switches along an array. Essentially I am curious about if this or other research has potential for constructing truly physically reprogrammable (externally and maybe wirelessly) logic gates.

\n

In general any information on advances towards this capability would be appreciated as right now it seems like the only rudimentary build I could manage for my project is a 64 gate one. That\u2019s not great because anything less than 512 gates would be very hard to make useful for my proof of concept project, and I know there\u2019s no way I could get to a more ideal 262,144 gates.

\n

One example would be any publication which covers if the kind of uses of phase-engineered low-resistance contacts for ultrathin MoS2 transistors covered in the articles below would be able to be produced with varying resistance in a band usable for varying activation via radio waves for switches.

\n

https://doi.org/10.1038/nmat4080

\n

https://www.ece.cmu.edu/news-and-events/story/2019/05/rectennas-converting-radio-waves-into-electricity.html

\n

I\u2019m not picky if someone knows about other technological advances approaching this capability such as biochemical non-locally programmable switch activation equivalent processes. Thanks everyone.

\n

Update 1: My specific question is: Have there been any significant technological advances towards non-locally electrically programmable logic gates?

\n

Update 2: After further review I\u2019ve found that FPGAs are not what I am asking about. Their reprogramming like PLCs is digital not analog. They seem to just be a more generalized similar thing to PLCs rather than being factory equipment. I might incorporate one or more in my project, but they aren\u2019t what I am referring to which is true analog reprogramming. Why does analog matter? Analog means more efficient at the surface level, but it also allows structured logic similar to ladder logic at the hardware level which enables significantly different uses in structuring and restructuring logic execution.

\n", "Title": "Non-locally Electrically Programmable Logic Gates - Technological Advances Progress", "Tags": "|hardware|wireless|microprocessors|plc|", "Answer": "

Here's what I'd do. I would use a CPLD (maybe even a PLD would do).

\n

If you need to choose amongst a set of known logic trees, I would have it all pre-programmed and use some sort of wired or wireless or light based comm to select the right input pins, to select the right logic tree for other pins.
\nIf you need the logic to be changeable but cannot make a fixed set, then you'd need to compile the required logic into a JTAG programmable file and send the file over perhaps by wired or wireless or light based comms to the target and have something at the target to change the CPLD via JTAG.
\nIt has been done before.

\n

If you need some analog capabilities along with this, look for something like this one instead of the CPLD: https://www.cypress.com/products/32-bit-arm-cortex-m3-psoc-5lp.

\n

Beyond this, it is hard to read your mind to see what you're looking for!

\n" }, { "Id": "5796", "CreationDate": "2021-07-06T14:15:35.357", "Body": "

I'm doing research on DDoS protection in MQTT brokers, and I started with the open source Mosquitto broker. I couldn't find any countermeasures listed in the documentation.

\n

Since my C language knowledge is not that deep, I would like to first ask this here before checking the source code. Are there any countermeasures against DDoS attacks in the Mosquitto broker?

\n", "Title": "DDOS Protection in Open Source MQTT Brokers", "Tags": "|mqtt|mosquitto|", "Answer": "

DDOS protection should be at the network level, not in the application.

\n

By the time it's made it down the TCP/IP stack to the application, it's too late and your machine is already on its knees.

\n" }, { "Id": "5803", "CreationDate": "2021-07-07T11:30:57.657", "Body": "

I'm running an MQTT Broker on my Windows machine using Mosquitto broker. I have setup a client on the same Windows machine through the Paho MQTT python library. Another client is setup using the same python library on a Linux machine that I have.

\n

So, when I publish a file from the Windows client, it reaches the Linux client easily without any issues. However, when I publish a file from my Linux client, it does not reach my Windows client. Later I noticed that if the file size is greater than 50 KB, the file does not transfer from the Linux machine. The broker's log does not display "Publish message received" if the file size exceeds 50 KB.

\n

The code for Windows machine to publish:

\n\n
import paho.mqtt.client as mqtt\nimport time\n\n\nclient = mqtt.Client("P2")\nclient.on_message=on_message\nprint("COnnecting")\nclient.connect("localhost")\nprint("Connected")\nf = open("D:/Downloads/test.txt",'rb')\nimagestring = f.read()\nbyteArray = bytearray(imagestring)\n\n\n\nprint("Publishing")\n\nclient.publish("photo", byteArray)\n
\n

The code for Linux machine to subscribe:

\n
import paho.mqtt.client as mqtt\nimport time\n\n\ndef on_message(mosq, obj, msg):\n  with open('test.txt', 'wb') as fd:\n    fd.write(msg.payload)\n    print("message topic=",msg.topic)\n\nclient = mqtt.Client("P1")\nclient.on_message=on_message\nprint("COnnecting")\nclient.connect("192.168.1.115")\n\nclient.loop_start()\n\nprint("Subbing")\nclient.subscribe("photo")\n\ntime.sleep(1000)\n\nclient.loop_stop()\n
\n

Code for Windows machine to subscribe:

\n
import paho.mqtt.client as mqtt\nimport time\n\n\ndef on_message(mosq, obj, msg):\n  with open('test.txt', 'wb') as fd:\n    fd.write(msg.payload)\n    print("message topic=",msg.topic)\n\nclient = mqtt.Client("P2")\nclient.on_message=on_message\nprint("COnnecting")\nclient.connect("localhost")\n\nclient.loop_start()\n\nprint("Subbing")\nclient.subscribe("photo")\n\ntime.sleep(1000)\n\nclient.loop_stop()\n
\n

Code for Linux machine to publish:

\n
import paho.mqtt.client as mqtt\nimport time\n\nclient = mqtt.Client("P1")\nprint("COnnecting")\nclient.connect("192.168.1.115")\nprint("Connected")\nf = open("/opt/plcnext/test.txt","rb")\nimagestring = f.read()\nbyteArray = bytearray(imagestring)\n\n\n\nprint("Publishing")\n\nclient.publish("photo", byteArray)\n
\n

Is there any restriction from the Linux machine side? or am I doing something wrong here?

\n", "Title": "Cannot transfer Files from linux machine to windows machine (via mqtt)", "Tags": "|mqtt|", "Answer": "

Your publishing code needs to run the client loop as well, otherwise it will only send 1 TCP/IP packet, which limits the sending size to whatever the MTU is on the network.

\n

Adding the same timed period with the client loop running will work, but a better solution would be to use the single-shot wrappers included in the Paho library, which ensure that the message is fully sent before closing the connection. The docs can be found here.

\n\n
import paho.mqtt.publish as publish\n\nf = open("/opt/plcnext/test.txt","rb")\nimagestring = f.read()\nbyteArray = bytearray(imagestring)\n\npublish.single("paho/test/single", byteArray, hostname="192.168.1.115")\n
\n" }, { "Id": "5828", "CreationDate": "2021-07-16T01:59:29.993", "Body": "

I'm still very new to IoT and IoT protocols, so pardon my ignorance.

\n

I'm trying to go about creating a home automation system to control lights, read room temperatures etc.

\n

The user should be able to interact with the IoT devices via a mobile client that is being developed with React Native. Could I implement an MQTT client on that platform? Yes, but I think it is way more convenient to have a REST API between the mobile client and the IoT devices. This way I can easily manage user-created routines, add authentication to control certain devices and implement a lot of other functionality. Also, some devices may use different protocols, so I'll let my API take care of this instead of implementing multiple clients on the front-end.

\n

I've found ways to "bridge" MQTT and HTTP by transforming HTTP POSTs into MQTT publishes, but this isn't exactly an elegant solution. I want to use my server as a subscriber to MQTT topics and serve the information over HTTP to the mobile client, as well as publish to topics when requests arrive or scheduled routines are triggered. How can I do this?

\n

I'll be preferably running the MQTT broker and the REST API on the same device (a RaspberryPi) connected to the local network.

\n", "Title": "How to make a MQTT Broker and a REST API communicate?", "Tags": "|mqtt|https|rest-api|", "Answer": "

Edit

\n

I've come across this stack overflow post. In case the link is dead: In short, the post describes a way to use JWT to authenticate a frontend user with your backend and mqtt broker.

\n

While it doesn't really cover your question, I think it still is useful. Specifically, it solves your problem of authenticating a client and controlling access to topics. The mosquitto broker, compiled with mosquitto-go-auth, would be an example of how you could implement the architecture described in the post.

\n

Further building on this architecture, to support multiple clients you could develop some sort of gateway from MQTT to other protocols. E.g. a backend service that connects to your MQTT broker and listens for messages on the topic newprotocol. When it receives such a message, it translates the message content to your desired protocol. Your frontend clients would only need to speak HTTP and MQTT then.

\n
\n

I'm also trying to build a DIY home automation system and I would very much appreciate it if you could keep this question up-to-date with your progress.

\n

That being said, I found a few useful links that you'll maybe also find useful:

\n\n

The gist of the article is:

\n
\n

Essentially, you start with a normal REST API and add MQTT messages for REST endpoints that result in a state change (POST/PUT/PATCH/DELETE).

\n
\n

Then, adding to this approach: anytime an IoT device changes state, publish the new state to a REST endpoint.

\n

What I've tried

\n

I hope you'll let me share my thoughts on how to build a system like this and I also hope that this'll be useful to someone. One way I thought about structuring this system is to have 3 components: Clients, REST-API and MQTT-Broker.

\n

In my case I have a react app running in the browser. It makes API calls to the REST backend (i.e. GET all IOT devices, POST new message to IOT device, etc.). The backend handles authentication, the PUB/SUB messaging with the broker (i.e. Handle IOT device status changes, find out what IOT devices exists, etc.) and any messages coming from the react app.

\n

However, I ran into a problem with this approach. How do you let the react app know the state of an IoT device changed while using a stateless protocol (HTTP in this case)? I thought about using websockets to transmit changes to the react app but I ended up not implementing it and went back to the drawing board. And that's where I am now. Another issue with this is that I would be restricting myself to using MQTT for every IoT device. But what if a device doesn't talk MQTT?

\n

Redis

\n

Kalyanswaroop's answer is very insightful. I don't know much about redis but it sounds very interesting. Instead of using MQTT as the sole protocol, we could use redis' messaging system. The IOT devices publish their messages into the message queue of redis instead of directly to the REST-API. All messages from various protocols are stored in the database. Then the API would only need to talk to the database and any clients, e.g. a react app.

\n

To transmit changes in the database to a client in real-time, you could use websockets.

\n" }, { "Id": "5836", "CreationDate": "2021-07-19T13:31:30.200", "Body": "

I have a micro-controller,ESP8266 for the moment, which is connecting to a mqtt broker and pulish some messages.\nI have configured mqtt broker on raspberry pi which has an IP address 192.168.43.164.\nso for now I have hard coded the mqtt broker IP in micro-controller firmware. But I want it like it search the mqtt server in local network and then connect to the same.

\n

So is it possible to do this? Does such a microcontroller have the capability to do so?\nI have already searched a lot on Google but have not found any solution. Please point me in the right direction.

\n

Thanks\nAbhishek

\n", "Title": "Searching local MQTT server on Local network", "Tags": "|mqtt|", "Answer": "

Although I have accepted @hardillb's answer, I am writing this answer to share another approach that I came across while researching this.

\n

OK, so the idea is to use IP broadcasting. What I have done is: my edge device continuously emits UDP messages over the broadcast IP that contain its own IP. The ESP microcontroller listens for UDP messages, extracts the IP from them and connects to it.

\n

The advantage is that if I have more than one edge device in the system, it is easy to allocate a different edge device to the microcontroller at run time. That helps with load balancing too.

\n
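The broadcast scheme described above can be sketched like this (the message format and port are my own choices, not a standard):

```python
import json
import socket

DISCOVERY_PORT = 4210  # arbitrary UDP port; pick any free one


def make_announcement(broker_ip):
    """Serialise the broker's own IP into the broadcast payload."""
    return json.dumps({"service": "mqtt", "ip": broker_ip}).encode()


def parse_announcement(data):
    """Extract the broker IP from a received payload, or None if not ours."""
    try:
        msg = json.loads(data.decode())
    except (ValueError, UnicodeDecodeError):
        return None
    return msg.get("ip") if msg.get("service") == "mqtt" else None


def broadcast_once(broker_ip):
    """Edge-device side: send one announcement to the LAN broadcast address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(make_announcement(broker_ip), ("255.255.255.255", DISCOVERY_PORT))
    s.close()
```

The ESP8266 side would listen on the same UDP port, run the equivalent of parse_announcement() on each datagram, and connect to the extracted IP.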

Thanks

\n" }, { "Id": "5839", "CreationDate": "2021-07-22T15:41:02.913", "Body": "

Planning a one off party event that involves turning mains lights on/off and then playing media on a PC connected screen.

\n

It need not be very fast and doesn't need to use internet but WiFi is available if that route would be simpler/cheaper.

\n

It is easy enough to source dumb 433 MHz remote-controlled sockets\nand rig the remote to fire when needed. But I am unsure how to go about detecting the transmission from the remote to the socket in order to play the media on the PC.

\n

I imagine I would need some sort of USB transceiver but I haven't found any USB ones that I could drive under Linux. Most of the articles online use expansion boards for Pi or Arduino. I did find one article using a cheap USB dongle and others vaguely mentioning USB SDR radio.

\n

I do have a USB to TTL Serial Converter that I use for flashing, could I buy a 433 transceiver and rig it up to that?

\n

Alternatively, I have a Wiimote lying around that could be rigged up as the sensor, and I could then use a WiFi plug that is accessible from Linux to turn the lights on/off.

\n", "Title": "Monitor plug socket remote from desktop PC", "Tags": "|remote-access|ac-power|", "Answer": "

If you have internet, it would be easiest to use a switch that can be controlled via IFTTT. Just google IFTTT light switch and you'll see the light!\nIf you don't have internet, you can still use a Raspberry Pi with a relay shield and run an MQTT broker on a PC to which the Raspberry Pi will connect. You can then run another MQTT program on the PC to send on/off messages to the Pi. You could also locate the broker on the Pi.\nYou may also run a web server on the Pi. Google can help there too!

\n" }, { "Id": "5876", "CreationDate": "2021-08-16T12:15:33.677", "Body": "

We know MQTT is based on TCP.

\n

Why don't we directly use a plain TCP connection for live data? Recently I have been using plain TCP for a realtime/live connection to transfer sensor data to another device over the internet.

\n

Architecture that I make:

\n
IoTBoard -> Server(VPS)-> ClientDevice(eg: smartphone)\n
\n

This architecture looks similar to MQTT:

\n
Publisher -> Broker -> Subscriber\n
\n

The data format I use over plain TCP on the IoT board for one update looks like this:

\n
device_id(defined by programmer)|meta_data|sensorA_val|sensorB_val|etc\n
\n

So basically it's just sending plain text to server

\n

So I'm using a pipe separator for every field. The server then handles it, filtering on the pipe-separated fields before sending to the client.

\n

Also, I heard WebSocket is good for live data too; why is it not better than MQTT?

\n

Do you think what I am doing is best?

\n", "Title": "Why MQTT protocol is better than Websocket or direct TCP socket for live data IoT?", "Tags": "|mqtt|", "Answer": "

What's "best" depends on the situation. However, here are the advantages:

\n
    \n
  1. [TCP, Websocket, MQTT] Connected session. There's a concept of a "connection" after which you can send data in either way.
  2. \n
  3. [Websocket, MQTT] Framing. When you want to send multiple messages, you need a way for the receiver to know where each message ends. With TCP, you have to write your own.
  4. \n
  5. [Websocket, MQTT] Security. With TCP, you have to either build your own security or use something like TLS (or HTTPS on top of TCP). With websockets, that comes as part of the standard. With MQTT, most providers (AWS, Azure, etc.) provide security as part of the package. Note that AWS allows MQTT over websockets too!
  6. \n
  7. [MQTT] pubSub: Instead of a 1-1 connection, you may want an ability to send from one, but receive by various/multiple receivers, depending on the topic. MQTT allows the concept of a broker as part of the standard for this. There is the concept of topics and routing in the broker. Some even support persistent messages, etc.
  8. \n
  9. [MQTT] LastWillAndTestament: One more feature of pub/sub. Look it up. Not all MQTT implementations support it.
  10. \n
  11. [MQTT] Ease of implementation: Since providers such as AWS, Azure provide IoT infrastructure, implementation is very easy. With TCP/Websocket, you'll have to setup the server. In that sense, HTTP may be easier than plain TCP/Websocket.
  12. \n
\n
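Point 2 (framing) is easy to underestimate. TCP is a byte stream: two sends can arrive merged or split, so the receiver needs a way to find message boundaries. A minimal length-prefixed framing sketch (my own illustration, not part of any standard):

```python
import struct


def frame(payload: bytes) -> bytes:
    """Prefix each message with a 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload


def deframe(buffer: bytes):
    """Split a receive buffer into complete messages plus leftover bytes."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # message not fully received yet; wait for more bytes
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer
```

With the pipe-separated records from the question, each `device_id|meta_data|...` string would be one framed payload; MQTT and WebSocket do this bookkeeping for you.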

What's your use case ?

\n" }, { "Id": "5879", "CreationDate": "2021-08-18T20:42:02.653", "Body": "

How does a smartwatch detect the worn handedness?

\n

I have googled, but I could not find any valuable information on the query. What is/are the hardware(s) and/or algorithm(s) smartwatches use to detect on which hand the user is wearing the device?

\n", "Title": "How does a smartwatch detect the worn handedness?", "Tags": "|wearables|", "Answer": "

With a 3-axis accelerometer in the watch, if the gravity vector is predominantly pointing at the right-hand edge of the watch, it's on the left wrist, and vice versa.
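As a sketch, that check reduces to the sign of the accelerometer's x reading while the arm hangs down; the axis convention here (+x pointing toward the right-hand, 3 o'clock edge of the watch face, readings in units of g) is an assumption for illustration:

```python
def wrist_from_gravity(ax, ay, az, threshold=0.7):
    """Classify the worn wrist from one gravity sample.

    Assumes +x points toward the right-hand (3 o'clock) edge of the
    watch face and that the arm is hanging down when sampled.
    """
    if ax > threshold:
        return "left"    # gravity along +x: right edge points down
    if ax < -threshold:
        return "right"   # gravity along -x: left edge points down
    return "unknown"     # arm not hanging down; sample again later
```

A real watch would average many samples over time rather than trust a single reading.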

\n" }, { "Id": "5882", "CreationDate": "2021-08-20T11:55:51.710", "Body": "

We are moving our LoRaWAN gateways and end-devices from Loriot to another open-source network server.\nBut we are experiencing problems with some of our devices when trying to rejoin using OTAA.

\n

What is the 'best practise' for connecting and joining end-devices to another network server? Are you always able to reset or rejoin devices using a downlink, or is there maybe another smart way of doing it?

\n", "Title": "(Re)Joining existing Lora-wan end-devices to another network server", "Tags": "|networking|lorawan|over-the-air-updates|", "Answer": "

Unfortunately, the LoRaWAN 1.0.x spec does not define any standard way for having end-devices rejoin the same or another LoRaWAN network and generate new session keys. This is a big operational issue. Just imagine what would happen if your production Network Server lost all session keys (because of a datacenter disaster). All devices would be disconnected without any way to reconnect. (LoRaWAN 1.1 introduces a new message type, the rejoin request, that solves this issue.)

\n

Device manufacturers usually implement proprietary solutions in the application layer of their firmware to let devices join the same or a new network after session keys are lost. The most common technique is that, upon detecting that the Network Server has stopped answering LinkADRReq MAC commands for a certain period (e.g. 6 hours), the device sends new JoinRequests until the new or recovered NS replies with a JoinAccept so that new session keys can be created.

\n

In my experience, 90% of device manufacturers (but not all of them) implement this technique. Therefore I suggest the following procedure to move end devices from the old NS to a new one.

\n
    \n
  1. Perform lab tests with all types of devices connected to your current network server and check if they really try to rejoin after a certain period of not being connected. (You can remove the device from your current NS, and immediately provision it again, so that you can be sure that its session keys are deleted.)
  2. \n
  3. Remove all your devices from your current Network Server
  4. \n
  5. Provision all your devices on your new Network Server
  6. \n
  7. Visit those devices (<10%) that do not automatically rejoin upon not receiving LinkADRAns messages and restart them manually. (You must have identified all these devices during lab tests before you started the migration procedure)
  8. \n
  9. Wait until the rest of the devices rejoin to the new network.
  10. \n
\n

This procedure is far from ideal, but there is currently nothing better.\n(In theory you could copy the session keys device by device from the old NS to the new NS, but in practice it would require additional integration effort with your old and new NS vendors that would be too costly.)

\n" }, { "Id": "5883", "CreationDate": "2021-08-21T12:29:02.733", "Body": "

I am trying to see the IPs of all the devices connected to my WiFi. Everyone suggests using the command arp -a in the terminal, but when I do so I always get the same list of devices. I connect new ones but they do not show up. However, if I go to the router administration site (192.168.1.1) in the browser, I can see all the devices I am looking for.

\n

How can I get all devices in the terminal using arp -a? Is there an alternative to get the complete list?

\n", "Title": "Why arp -a does not show all devices connected to WIFI?", "Tags": "|networking|wifi|wireless|routers|ip-address|", "Answer": "

It can, but there's a ~5 minute timeout on unused records in the ARP table, so entries will time out.

\n

You can probably extend that timeout, but a quicker answer is just to sweep through the IP network first.

\n
fping -a -g 192.168.1.0/24  ;  arp -an | grep -v incomplete\n
\n

Grepping out the incomplete records removes the ones where no host was found, and -n saves waiting for a bunch of reverse DNS lookups that won't work anyway.

\n

I like fping but there are other options like arping or nmap or scripting the use of ping.

\n" }, { "Id": "5895", "CreationDate": "2021-08-26T08:03:13.160", "Body": "

Please look at this sketch for ESP32. It does nothing but:

\n
    \n
  1. connects to WiFi
  2. connects to AWS MQTT
  3. subscribes to the /get/accepted topic
  4. every 5 s publishes an empty message to the /get topic to retrieve the shadow file
\n
#include <SPIFFS.h>\n#include <WiFi.h>\n#include <WiFiClientSecure.h>\n#define ETH_CLK_MODE ETH_CLOCK_GPIO17_OUT\n#define ETH_PHY_POWER 12\n#include <ETH.h>\n#include <SSLClient.h>\n#include <PubSubClient.h>\n\n#define MQTT_PACKET_SIZE      4096\n#define ID "myid"\n\nWiFiClientSecure networkClient;\nPubSubClient pubsub;\n\nchar *rootCA;\nchar *privateKey;\nchar *certificate;\n\nunsigned long old_millis = 0;\n\nbool readFile(const char *path, char **buffer)\n{\n  File file = SPIFFS.open(path);\n\n  size_t size = file.size();\n  *buffer = (char *) malloc(size + 1);\n\n  char *p = *buffer;\n  while(file.available()) *p++ = file.read();\n  *p = '\\0';\n  return true;\n}\n\nvoid callback(char *topic, byte *payload, unsigned int length)\n{\n  Serial.print("Received ");\n  Serial.print(length);\n  Serial.print(" bytes @ ");\n  Serial.println(topic);\n  Serial.println((char *) payload);\n}\n\nvoid setup() \n{\n  Serial.begin(115200);\n  SPIFFS.begin();\n  WiFi.mode(WIFI_STA);\n  WiFi.begin("myssid", "mypassword");\n  while (WiFi.status() != WL_CONNECTED)\n  {\n    Serial.println(".");\n    delay(1000);\n  }\n\n  Serial.println(WiFi.localIP());\n\n  char filename[64];\n  readFile("/AmazonRootCA1.pem", &rootCA);\n  networkClient.setCACert(rootCA);\n\n  sprintf(filename, "/%s-cert.pem.crt", ID);\n  readFile(filename, &certificate);\n  networkClient.setCertificate(certificate);\n\n  sprintf(filename, "/%s-private.pem.key", ID);\n  readFile(filename, &privateKey);\n  networkClient.setPrivateKey(privateKey); \n\n  pubsub.setServer("myendpoint-ats.iot.us-east-2.amazonaws.com", 8883);\n  pubsub.setBufferSize(MQTT_PACKET_SIZE);\n  pubsub.setClient(networkClient);\n  pubsub.setCallback(callback);\n\n  old_millis = millis();\n}\n\nvoid loop() \n{\n  char topic[64];\n\n  if (pubsub.connected())\n  {\n    pubsub.loop();\n\n    if (millis() - old_millis > 5000)\n    {\n      sprintf(topic, "$aws/things/%s/shadow/name/module-1/get", ID);\n      pubsub.publish(topic, "");\n      
Serial.print("Publish to ");\n      Serial.println(topic);\n      old_millis = millis();\n    }\n  }\n  else\n  {\n    Serial.print("State ");\n    Serial.println(pubsub.state());\n    if (pubsub.connect(ID))\n    {\n      Serial.println("MQTT connected");\n      sprintf(topic, "$aws/things/%s/shadow/name/module-1/get/accepted", ID);\n      if (pubsub.subscribe(topic)) Serial.print("Subscribed to: ");\n      else Serial.print("Error while subscribing to: ");\n      Serial.println(topic);\n    }\n    else\n    {\n      Serial.print("failed, rc=");\n      Serial.println(pubsub.state());\n      delay(1000);\n    }  \n  }\n}\n
\n

Here is the output:

\n
.\n.\n.\n09:52:35.752 -> .\n09:52:36.745 -> .\n09:52:37.738 -> 192.168.1.41\n09:52:37.837 -> State -1\n09:52:40.716 -> MQTT connected\n09:52:40.716 -> Subscribed to: $aws/things/myid/shadow/name/module-1/get/accepted\n09:52:42.834 -> Publish to $aws/things/myid/shadow/name/module-1/get\n09:52:42.967 -> State -3\n09:52:45.945 -> MQTT connected\n09:52:45.945 -> Subscribed to: $aws/things/myid/shadow/name/module-1/get/accepted\n09:52:47.831 -> Publish to $aws/things/myid/shadow/name/module-1/get\n09:52:47.997 -> State -3\n09:52:50.975 -> MQTT connected\n09:52:50.975 -> Subscribed to: $aws/things/myid/shadow/name/module-1/get/accepted\n09:52:52.828 -> Publish to $aws/things/myid/shadow/name/module-1/get\n09:52:52.993 -> State -3\n
\n

Every time it publishes the message, the MQTT connection is lost. Why?\nWhat is my mistake?

\n", "Title": "MQTT disconnects from AWS when publishing a message", "Tags": "|mqtt|arduino|esp32|aws|", "Answer": "

Some quick possibilities:

\n
    \n
  1. You don't actually have a thing with the name of "myid".
  2. You haven't given this client the rights to the topics you're subscribing to or publishing on.
  3. There's not a named shadow with that name (though that should have got you an empty shadow, I'd assume). Try doing a put (after giving it rights to do so).
\n
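If missing rights are the cause, the AWS IoT policy attached to the device certificate needs to allow the connect, subscribe and publish actions on those shadow topics. A sketch of what such a policy could look like — the region, account ID and ARNs below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Connect",
      "Resource": "arn:aws:iot:us-east-2:123456789012:client/myid"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-east-2:123456789012:topicfilter/$aws/things/myid/shadow/name/module-1/get/accepted"
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Publish", "iot:Receive"],
      "Resource": "arn:aws:iot:us-east-2:123456789012:topic/$aws/things/myid/shadow/name/module-1/*"
    }
  ]
}
```

Note that subscriptions are authorized against `topicfilter/...` resources while publishes are authorized against `topic/...` resources.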

Also, does the default shadow work?

\n" }, { "Id": "5913", "CreationDate": "2021-09-07T14:29:40.417", "Body": "

Is rsync suitable for IoT devices to move data from and to the cloud?

\n

Is there an equivalent for devices that have an unreliable connection?

\n", "Title": "Is there an equivalent to the rsync toolset/protocol for the IoT world", "Tags": "|cloud-computing|edge-computing|", "Answer": "

It is possible with rsync, but it is 1-to-1 and requires you to run a server in the cloud. If you use Amazon S3 instead, you can just use the AWS CLI with the sync option to push the data to S3, and from there sync it to one or more other machines if needed.
\nIf you don't want to use the AWS client on your device, you could use rclone and still have an S3 backend. rclone even supports Microsoft Azure Blob Storage and a whole lot of other backends too.
\nsyncthing.net provides similar services to rsync.
\nUnreliable connectivity can be mitigated by scheduling and, if needed, triggering based on connectivity establishment.

\n" }, { "Id": "5926", "CreationDate": "2021-09-14T05:58:35.240", "Body": "

Why should we consider having a LoRaWAN roaming agreement with another LoRaWAN network operator whose network coverage partly overlaps ours? Our end devices are fixed (e.g. water meters, temperature sensors) and our network is slightly larger than our partner's. Does it make sense to set up roaming in such an environment?\nWe operate a ThingPark LoRaWAN network server that can be easily connected to the ThingPark Exchange (TEX) roaming hub.

\n", "Title": "LoRaWAN roaming for overlapping networks", "Tags": "|lorawan|thingpark|", "Answer": "

LoRaWAN has a feature called "macro-diversity", which allows more than one GW to demodulate the same uplink frame (UL).

\n

Multiple gateways receiving the same UL increases network resiliency and performance (in the face of changing/challenging RF conditions).

\n

Thanks to the LoRaWAN Passive Roaming feature, the set of GWs demodulating the UL can belong to different networks: the home network and one or more roaming partners. (Interestingly, a LoRaWAN device can be seen as being at home and roaming in multiple visited networks simultaneously.)

\n

This "network collaboration" creates the net effect of multiple networks contributing their GWs to act as "one densified network". Which, thanks to the LoRaWAN ADR algorithm, enables the end-device to use higher data rates with lower transmission power, and therefore reduces the battery consumption and interference. This is a win-win-win for the end-device - home network - visited networks.

\n

That's why it is highly advisable to use LoRaWAN roaming even for networks with overlapping coverage.

\n" }, { "Id": "5927", "CreationDate": "2021-09-14T06:14:08.177", "Body": "

We are planning to connect several thousand LoRaWAN-connected water meters. The expected battery lifetime is 8+ years. Our concern is that meter suppliers release new firmware versions including bug fixes quite often (~every 6 months), so we cannot afford not to update the firmware of our devices for a period of 8 years. Is there any way to update a device's firmware over a LoRaWAN network? We are operating a ThingPark network server.

\n", "Title": "Firmware update over a LoRaWAN network", "Tags": "|lorawan|thingpark|", "Answer": "

Yes, firmware updates are a big issue with LPWAN networks, because they do not have the downlink capacity to do device-per-device full firmware updates as you can with, e.g., Bluetooth or WiFi.

\n

However, there is a good solution for that, leveraging the facts that (1) new FW updates are often only patches and (2) many devices require the same update.

\n

The ThingPark platform has a module that provides reliable multicast (RMC server), allowing you to broadcast a given file to multiple radio cells at once (the multicast group is created by flagging radio cells). You can combine it with another feature of this server to automatically compute a delta patch (ThingPark FUOTA, https://www.actility.com/iot-device-firmware-update-over-the-air/), i.e. compress the new FW by sending only the changes. This is actually very complex, because even small patches can change pointers all over the code... but ThingPark FUOTA does a good job at this and will typically deflate your new FW by ~85-90%.\nYou can create upgrade campaigns and follow progress in terms of % of devices upgraded. A campaign automatically stops when a given success rate (e.g. 95%) is reached, and then you can restart campaigns for only the failed devices, or use other methods (e.g. on-site visits and BLE) for the devices that are unreachable.

\n

The reliable multicast part is a LoRa Alliance standard (supported by ThingPark RMC), but the delta patch isn't, so you need support for it in your firmware. The good news is that the client is available for free if you use FUOTA, and there is an optimized implementation for STM MCUs (for other ports you can ask Actility). You can find the documentation and a pointer to the client-side Git repository here: https://www.actility.com/thingpark-documentation-portal/

\n" }, { "Id": "5928", "CreationDate": "2021-09-14T06:32:17.790", "Body": "

We are planning to connect several thousand door-opening and other home security sensors to a country-wide public LoRaWAN network, but our concern is that, in theory, our public LoRaWAN network provider can decode the devices' messages. We could apply ABP activation (Activation By Personalization) so that we don't share the AppSKey with the public service provider, but that would result in a very complicated provisioning procedure. Is there any way to set up end-to-end security with the ThingPark network server even in case of Over The Air Activation?

\n", "Title": "LoRaWAN end-to-end security with Over The Air Activation", "Tags": "|security|lorawan|thingpark|thingpark-activation|", "Answer": "

Actually, even with OTA activation (using Join), the LoRaWAN spec separates the network session key (used by the network server to verify checksums and potentially disambiguate colliding short DevAddr addresses into unique DevEUIs) from the payload-encrypting session key (AppSKey). This is one of its key innovations compared to other MAC layers like 802.15.4.

\n

The trick is to use an HSM (hardware security module) which has access to the device's root key (AppKey) and derives the NwkSKey and AppSKey (the joining process). The HSM passes the NwkSKey to the network server in the clear, and passes the AppSKey encrypted (using a key-encryption key previously shared with the application server). The network server therefore has no access to the payload information; it passes the payload as-is (encrypted by the device at the source), together with the encrypted AppSKey, to the Application Server. The AS first decrypts the AppSKey, then the payload using the AppSKey.

\n

A common misconception is that the HSM must be at the end customer's premises. That is not the case: HSMs are designed to be secure in a hostile environment, i.e. an HSM in the cloud will not reveal its keys unless a pair of smart cards is presented to it, and any attempt to get into the hardware will destroy the keys. Therefore you can use hosted HSMs, as long as the smart cards are kept by a trusted party.

\n

This secure framework is available on your ThingPark platform, either locally by taking the HSM option, or by using the separate ThingPark Activation platform (https://www.actility.com/iot-device-activation/), which acts as a secure join server and can be used in combination with any network server which supports the standard LoRaWAN back-end interface.

\n

You can use these solutions either with AppKeys provisioned by your device vendor into the ThingPark platform HSM, or using preconfigured secure elements for your devices. The ThingPark HSM hosts secure key-derivation software from leading SE vendors, which can access the AppKey directly from the SE serial number, so there is no need to provision any key in the HSM.

\n

Once the device root key is provisioned in the ThingPark Activation platform, it can never be accessed; you can only transfer its ownership by means of a one-time activation token (provided, the first time, as part of the standard LoRaWAN QR code on recent LoRaWAN devices).

\n" }, { "Id": "5929", "CreationDate": "2021-09-14T06:49:48.427", "Body": "

We use Abeeway Compact Trackers connected via a ThingPark powered LoRaWAN network to track pallets. Our Application Server receives and decodes messages and stores the location coordinates in a DB so that the App Server can visualise the location history on request.

\n

What would be the benefit of using a cloud-based location solver (e.g.: ThingPark X Location Engine) with this solution? Would that increase the location accuracy? Wouldn't it just add unnecessary complications to the platform?

\n", "Title": "Location Solver with LoRaWAN trackers", "Tags": "|lorawan|thingpark|thingpark-location|", "Answer": "

If you use only GPS geolocation or a local BLE location solver, then a cloud service is not required. However if you want to benefit from assisted-GPS, or WiFi geolocation, then the ThingPark Location Engine (cloud based) is required.

\n

The main benefit of Assisted GPS is that the GPS chip does not need to wait until it has acquired the full satellite information (which requires a good signal, as this information is transmitted using a modulation that requires good SNR): you will get a much faster fix from a cold-start situation with Assisted GPS (typically <10 s versus over a minute). This is particularly noticeable if your application requires indoor-outdoor tracking, e.g. tracking people who get in and out of buildings. A shorter fix time means lower power, by an order of magnitude. An A-GPS fix also has lower accuracy than a full GPS fix, because full GPS does local averaging before sending the location via LoRaWAN, which is not possible with A-GPS (so you get a single fix). Assisted GPS is the only option if you use the LR1110 chip, which is not a full GPS. All Abeeway trackers have a full GPS chip, which is first used in A-GPS mode to get a shorter fix time and then switches to full GPS when all satellite data is available, so you get better accuracy if/when needed.

\n

WiFi geolocation is of course very useful for indoor location, but also in urban areas, where it can typically be used in over 80% of cases and also consumes a lot less power than GPS. Abeeway trackers can also use WiFi positioning, and this requires access to a cloud database (via ThingPark Location).

\n

The bottom line is that the cloud services of ThingPark Location make Abeeway trackers much lower power and increase battery life, and also make GPS location faster and more robust. Abeeway has a patent called "LP-GPS" which further enhances A-GPS by solving independently for time, leveraging LPWAN synchronicity and further reducing the fix time.

\n" }, { "Id": "5934", "CreationDate": "2021-09-16T07:33:43.247", "Body": "

I have a Raspberry Pi 4 and am thinking about buying EC and pH sensors from Grove. I've learned that the Pi 4 doesn't have an analog input, so I'm thinking about connecting them through an ADS1115 from Grove.\nI'm not sure about the connections, so I would like to know: will two sensors need two ADS1115 boards, or can I use one for both sensors at the same time?\nAlso, if I need two, is it possible to connect one on top of the other?

\n", "Title": "4-Channel 16-Bit ADC for Raspberry Pi (ADS1115) - Can I connect two analog sensors?", "Tags": "|raspberry-pi|sensors|", "Answer": "

The Seeedstudio doc for the part says it has 4 channels, so you should be able to connect up to 4 analogue sensors to each ADS1115 device.

\n

The ADS1115 devices can also be configured with 4 different i2c addresses which means you can connect up to 4 of them to a single i2c bus, which would mean up to 16 analogue sensors in total to a Pi (using the default single i2c bus).
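Whichever library you end up using, the raw 16-bit two's-complement readings map to volts via the programmed gain. A small sketch — the full-scale ranges per gain setting are from the ADS1115 datasheet, and the helper name is mine, not from any particular library:

```python
# Full-scale range (volts) for each ADS1115 PGA gain setting (datasheet values)
FSR_BY_GAIN = {2/3: 6.144, 1: 4.096, 2: 2.048, 4: 1.024, 8: 0.512, 16: 0.256}

def raw_to_volts(raw: int, gain=1) -> float:
    """Interpret a 16-bit two's-complement ADS1115 reading as volts."""
    if raw >= 0x8000:            # negative differential readings
        raw -= 0x10000
    return raw * FSR_BY_GAIN[gain] / 32768.0
```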

\n" }, { "Id": "5956", "CreationDate": "2021-09-23T10:17:55.927", "Body": "

The LoRaWAN spec defines the LinkADRReq MAC command with a field (NbTrans) specifying the number of requested repeated transmissions of every frame. Actility claims that their ADRv3 algorithm uses that field to improve the packet error rate by requesting every frame to be transmitted multiple times instead of just increasing the Spreading Factor.

\n

I have several concerns about that approach:

\n\n", "Title": "The impact of LoRaWAN repeated transmissions", "Tags": "|lora|lorawan|thingpark|", "Answer": "

You might be interested in this talk by Olivier Seller from Semtech: "Network Capacity with LoRaWAN". From their modelling, an NbTrans of 3 is optimal.

\n
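The intuition behind NbTrans: if a single transmission is lost with probability p and losses are independent (an idealisation — it ignores collisions between the repeats themselves), the frame is lost only when all copies are. A tiny sketch:

```python
def effective_per(per: float, nb_trans: int) -> float:
    """Frame loss probability after nb_trans independent transmissions,
    assuming each copy is lost independently with probability `per`."""
    return per ** nb_trans

# e.g. a 10% single-shot loss rate drops to 0.1% with NbTrans = 3
```

The real trade-off, as the talk discusses, is that each repeat also adds to the channel load, which is why the optimum is finite.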

https://www.youtube.com/watch?v=VmYuItA6q4I&list=RDCMUCv85CXnZUXEKnlZpQapTAwQ&start_radio=1&rv=VmYuItA6q4I&t=450

\n" }, { "Id": "5964", "CreationDate": "2021-09-27T11:24:21.567", "Body": "

For the past few weeks I've been unable to use my Samsung SmartThings washer and dryer via the app. I use the notification feature to keep track of completed loads and sometimes use the app to start the machines or delay their runs.

\n

Ever since I switched to a Unifi Dream Machine to run the devices, I've been struggling to connect to them and use them remotely. Every time I run the set-up process, they will stay connected for a few hours at a time and then irreparably go offline. Only unplugging them and plugging them back in temporarily fixes the issue.

\n\n

Here's what I've tried:

\n\n

Nothing so far has worked. I plan to contact Samsung support soon but I wonder if anyone else has run into this before and whether they've been able to solve it. It's a minor issue as the devices still work manually but frustrating nonetheless as it's part of the reason why I bought these devices in particular.

\n

Here are the model names:

\n\n", "Title": "Samsung SmartThings washer and dryer unable to connect to the app", "Tags": "|networking|samsung-smartthings|mobile-applications|", "Answer": "

I'm not sure how, but the devices recently started working as expected. If others run into the same issue, I suggest updating your UDM (or whatever router you use) to the latest software version (I'm running 7.0.23) and making sure the washer/dryer is also up to date. Before the issue resolved itself, I took to unplugging the device and plugging it back in whenever it'd stop connecting, in order to force a soft reset.

\n

Resetting the network settings of the machine (see here for instructions) and then connecting it to SmartThings again may also help.

\n

These are my Wi-Fi network settings:

\n

\"enter

\n" }, { "Id": "5974", "CreationDate": "2021-10-06T09:16:34.073", "Body": "

For static objects, LoRaWAN has a very neat 'Adaptive Datarate (ADR)' feature that is key to the scalability of the protocol, it allows the network server to select the fastest uplink datarate compatible with target packet error rate (to conserve energy and minimize collisions), but also to control power and repeat (channel diversity). A lot of the value add of LoRaWAN network servers is here.

\n

However, it only works for static objects, as this optimization depends on a stable radio channel.\nFor a moving object (or one with an unstable radio channel, like a parking sensor when a car is parked over it), what are the best practices to select uplink data rates, power and repeats? When the motion stops, what should the initial data rate, power & repeat settings be?

\n", "Title": "What are the datarate and ADR best practices for moving objects", "Tags": "|lora|lorawan|thingpark|thingpark-location|", "Answer": "

LoRaWAN's ADR has a relatively long convergence time, because the network server needs to collect data (SNR, packet error rate, etc.) from several uplink messages before it can calculate the ideal data rate and packet repetition parameters. For example: if a device sends UL messages every 2 min and the NS needs 10 UL messages to calculate the packet error rate and the average SNR value, that results in a 20 min convergence time. If the NS calculates parameters based on a sliding window of the last 10 messages, this could be a bit shorter, e.g. ~10 min.

\n
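To illustrate why several uplinks are needed, here is a hedged Python sketch of the idea (loosely based on the published Semtech reference ADR algorithm, not ThingPark's actual ADRv3): take the best SNR over the recent uplink history, subtract the demodulation floor for the current spreading factor and an installation margin, and convert the remainder into SF steps. The 3 dB-per-step rule and the 10 dB margin are common defaults, used here as assumptions:

```python
# Approximate LoRa demodulation-floor SNR (dB) per spreading factor, SF12..SF7
REQUIRED_SNR = {12: -20.0, 11: -17.5, 10: -15.0, 9: -12.5, 8: -10.0, 7: -7.5}

def adr_step(snr_history, current_sf, margin_db=10.0):
    """Suggest a new spreading factor from recent uplink SNRs (dB)."""
    margin = max(snr_history) - REQUIRED_SNR[current_sf] - margin_db
    steps = int(margin // 3)           # one SF step per 3 dB of spare margin
    return max(7, current_sf - max(0, steps))
```

With only one or two samples, `max(snr_history)` is a poor estimate of the channel, which is exactly the convergence-time problem described above.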

If the link budget changes on the scale of the convergence time, ADR won't bring any benefit and must not be used.\nThe proper solution to this challenge depends on the actual use case.

\n

PARKING SENSORS
\nA possible solution to manage the changing link budget of parking sensors could be the following:

\n\n

MOVING SENSORS (e.g.: Tracking Sensors)
\nI case of moving devices, (e.g.: asset trackers) there is no obvious way to predict the actual link budget. In such environment the tracker must use static transmission parameters (e.g.: always the same data rate and the same num. of retransmissions) and the data rate should be so slow that UL messages can be decoded comming from all area the tracker could show up.

\n

IMPACT ON NETWORK CAPACITY
\nIt is very important to note that forcing hundreds of frequently transmitting tracker devices to always use the same low data rate in a certain area (e.g. the area of a factory) will generate high congestion in the network and increase the packet error rate significantly.

\n\n

WHEN SHALL A MOVING OBJECT SWITCH BACK TO ADR?
\nIn case a moving sensor has a built-in accelerometer and can detect that motion has stopped, it may decide to use ADR again. The "motion stopped" event can be defined based on a time interval that the device should wait without motion. When ADR starts again, the initial data rate should be the same one the device would start ADR with at boot time (e.g. SF12).

\n" }, { "Id": "5986", "CreationDate": "2021-10-09T20:48:44.340", "Body": "

I have an amplifier that supports DLNA streaming. DLNA is not actually supported by the streaming services I use (amazon music, idagio).

\n

There are applications out there that are supposed to bridge the gap between the phone & DLNA supported device, such as BubbleUPnP & mconnect.

\n

However their support is also frustratingly limited. For example BubbleUPnP does support streaming from TIDAL & Qobuz but that's it.

\n

I am going to give up DLNA solution and move on to something like Chromecast or a dedicated streamer.

\n

However I just want to make sure, is there a way I can stream anything (in an adhoc way) from my phone to my amp?

\n", "Title": "Is there anyway to stream music from an unsupported streaming service to a DLNA enabled amp?", "Tags": "|streaming|", "Answer": "

Bluetooth would be one way.
\nYou just need to buy a Bluetooth adapter so that you can send the phone's output to the amplifier via Bluetooth (which I assume it currently does not have).
\nThere are many on the market, e.g. https://www.amazon.com/dp/B086VZQG55/ref=cm_sw_em_r_mt_dp_VJGK0VRDQYVXA6GSHQR7?_encoding=UTF8&psc=1
\nNow, the issue is that the phone app has to stay on, doing the work of downloading the song bytes and sending them out over Bluetooth. You can't just 'cast' to the amplifier as you could with a Chromecast.
\nIf you have to use Chromecast with your beloved amp, you may have to find an HDMI Chromecast and then something like this https://www.amazon.com/dp/B084RN22MW/ref=cm_sw_em_r_mt_dp_KA6H54059MFRAKE6PK4R?_encoding=UTF8&psc=1 to extract the audio and feed it to your amplifier. (I haven't tried this.)
\nThere are, of course, other amplifiers with Chromecast built in...
\nWish you all the best!

\n" }, { "Id": "5989", "CreationDate": "2021-10-11T12:12:02.963", "Body": "

LoRaWAN is optimized for the constrained LoRa physical layer; however, the gateways add their own overheads when relaying LoRaWAN frames to/from the network servers, and there is also traffic related to the operation and maintenance of the gateway.

\n

What kind of overhead can we expect per LoRaWAN packet (i.e. on top of the LoRaWAN over-the-air frame size), and how much monthly traffic is related to maintenance of the gateway? Is this tunable for constrained/expensive backhaul like satellite or cellular?

\n", "Title": "LoRaWAN backhaul overheads over satellite or cellular", "Tags": "|lorawan|thingpark|", "Answer": "

Indeed, having a gateway-LNS backhaul with optimized signaling overhead is key for OPEX savings in case of cellular backhaul. It is even essential for backhaul connectivity over satellite.

\n

Actility's ThingPark LRR (LRR stands for Long Range Relay, which is the name of the packet forwarder in the ThingPark platform) has been designed to optimize GW-NS backhaul traffic. Here are the main related features:

\n\n

For the sake of minimizing the backhaul overhead for satellite connections, with optimized settings of the keep-alive periodicity and other application-specific timers, we measured monthly house-keeping traffic reduced to 50 MB/month (including secure IPsec tunneling overhead, but excluding LoRaWAN traffic).\nCombining the compact binary encoding + NwkID filtering with this optimized house-keeping overhead should yield less than 400 MB of monthly volume, but the exact figure of course depends on the LoRaWAN traffic volume.

\n

Expect a ~30x reduction compared to GW-to-LNS backhaul protocols using JSON or a similar text format.

\n" }, { "Id": "5993", "CreationDate": "2021-10-13T15:01:19.900", "Body": "

I want to do a simple thing:

\n

Give away buttons to my friends that they can click randomly and it simply calls an API that I have implemented.

\n

I cannot wrap my head around what I should do and what the cheapest option is. So far I have found the Flic button, but it needs a hub and the price would be above $100 for this simple thing.

\n

Any stand alone button that I can configure to do this simple thing?

\n", "Title": "How to call an API using a smart button?", "Tags": "|rest-api|", "Answer": "

There\u2019s no shortage of options, though they may have different advantages or drawbacks.

\n

The two main things to consider are how the device will connect to the Internet, and power.

\n

To reach the Internet, you have at least the following options:

\n\n

Until here I have mostly assumed that you want to operate the device on battery with no wires. If wires are not an issue, you could simply plug a USB adapter into the device (and then an M5 Atom Lite is probably the cheapest option), or even use PoE.

\n" }, { "Id": "5996", "CreationDate": "2021-10-14T20:12:12.643", "Body": "

I have some development boards like the FiPy with Expansion Board, mainly to experiment with programming on the processor, but it would be interesting to connect some peripherals (by wire, not wireless) like cameras and sensors for motion, temperature, sound, etc. Is there a guide to generic peripherals that would help me understand what kind of wired connections are possible on these various boards? It's fairly easy to get info about a Raspberry Pi or an Arduino, but I'm not using those boards. I'm a total newbie to IoT, though I've been programming on ordinary desktops for many decades. How can I learn the basics of wiring generic peripherals to an IoT dev board?

\n", "Title": "Guide to connecting generic peripherals", "Tags": "|sensors|", "Answer": "

There are quite a number of possible interfaces for sensors and other peripherals. Some exist on nearly all platforms (possibly with a few differences), others are less common.

\n\n

Most MCUs, including the ESP32 on the FiPy, will include several UART, SPI, I2C, ADC and/or DAC ports, which can often be assigned to different pins. There are sometimes limits on which pins can do what, and of course a given pin can only be used for one function at a time. Different chips have different numbers of ports (for instance they may have 1, 2 or 3 UARTs), and have different numbers of available pins.

\n

On a board like the FiPy, many pins of the ESP32 are already used to connect the various peripherals on the board itself (LoRa/SigFox modem, cellular modem, LED, flash). The expansion board uses a few more (for the SD card, the PIC which acts as a serial/USB converter\u2026). The final choice is often quite limited.

\n

Many types of devices exist in many different forms with varying interfaces. Which ones to use will depend on the combination of peripherals you need and what interfaces they are available with.

\n

You’ll often find devices using “standard” interface ports like Grove (from Seeedstudio, also many M5Stack devices and a few others), Qwiic/Stemma QT (Sparkfun/Adafruit) or UEXT (Olimex). The first one at least is tricky, because it uses the same physical connector but may carry different logical/electrical interfaces: UART, SPI, I2C, digital or analog. I think the same applies to Qwiic/Stemma QT, though I’m not sure. They may also use different voltages (3.3V or 5V), though many devices have the necessary logic to support both. UEXT carries multiple interfaces on the same port, but on different pins.

\n" }, { "Id": "5998", "CreationDate": "2021-10-15T11:01:25.713", "Body": "

For smart-city or industrial-campus types of projects, there are devices that leave the network for e.g. 2 days (a worker leaving for the weekend) and then come back... but they might remain disconnected for a while until the next re-join.

\n

The LoRa Alliance has a best practice that mandates exponential back-off of joins, but it is still a bit vague on the detection of isolation (which triggers re-joins). This question asks what the best practices are in the field, the pros/cons of rejoining strategies, and what is effectively tested as part of LoRaWAN certification.

\n", "Title": "LoRaWAN device leaving a network and then coming back... what Join behavior to expect?", "Tags": "|lora|lorawan|thingpark|thingpark-activation|", "Answer": "

LoRaWAN end devices have to cope with 2 different types of network connectivity issues:

\n
    \n
  1. The device temporarily loses network connectivity.
     This can happen in the following example cases:
     • There is no network coverage at the current location of a moving device.
     • There is a temporary network outage in the area of a fixed device (e.g. the LoRaWAN gateway that is supposed to serve the device lost backhaul connectivity).
  2. The network server loses the session context (AppSKey, NwkSKey) of the device.
     This can happen if
     • the network server had a non-recoverable DB failure, or a new server is taking over the role of an old one without providing session continuity,
     • or the owner simply deleted the device from the network server and provisioned it again.
\n

Lost network connectivity

\n

The device has 2 ways to detect that the network connectivity is temporarily lost:

\n\n

If the device detects (in any way) that the network is not available, it should first try sending uplink messages (e.g. LinkCheckReq messages) at the lowest possible data rate (which increases the link budget to the maximum), and if that still does not fix the connectivity problem, the device should exponentially back off consecutive uplink messages.\nIn practice, end devices tend to send new JoinRequest messages rather than basic UL messages. The next section explains why.

\n
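The exponential back-off mentioned above can be sketched as follows. The base delay, cap and jitter are illustrative assumptions, not values from the LoRa Alliance document; the jitter matters so that a whole site does not rejoin in lock-step after a gateway outage:

```python
import random

def join_backoff_delays(attempts, base_s=10, cap_s=3600, rng=random.random):
    """Return the wait in seconds before each of `attempts` join retries:
    doubled after every failure, capped, with 0.5x..1.5x jitter."""
    delays = []
    for n in range(attempts):
        delay = min(cap_s, base_s * (2 ** n))   # exponential growth, capped
        delays.append(delay * (0.5 + rng()))    # randomize to avoid lock-step
    return delays
```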

Lost session context

\n

In a LoRaWAN 1.0 network, if the network server has lost the session context, the only way to reconnect a device is to make it send a new join request.
\nHowever, the device has no means to check whether its connectivity problem is due to a temporary network outage or to a lost session (both result in no downlink messages).\nThe solution to this issue is that in case of any long-term connectivity problem (one that cannot be fixed by lowering the data rate), the device starts sending new Join Request messages, which will make the Network Server create a new session context and reset the frame counters too.

\n

    This "lost session" problem is handled much better by LoRaWAN 1.1 networks. LoRaWAN 1.1 introduces a new message type: Rejoin-Request. End devices regularly send Rejoin-Request Type 1 messages, which give the Network Server the option to create a new device session by answering with a Join Accept message. However, if the Network Server still has a valid session, it silently drops these messages. This solution prevents unnecessary resets of valid device sessions in case of connectivity issues and increases the level of security.
    

\n" }, { "Id": "6016", "CreationDate": "2021-10-20T11:52:25.810", "Body": "

    Can a traffic sniffer see the username and password a client sends to the broker when the broker doesn't use TLS?
    

\n", "Title": "Can traffic spy see username and password of MQTT client", "Tags": "|mqtt|security|tls|", "Answer": "

    Yes, without TLS the CONNECT packet is sent in the clear, so all of its content can be seen.
    

\n

    A full breakdown of how to decode the CONNECT packet can be found here.
    

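    To illustrate, here is a minimal Node.js sketch that pulls the username and password out of a captured MQTT 3.1.1 CONNECT packet, following the field layout from the spec (it assumes the single-byte Remaining Length form and no Will fields):

    ```javascript
    // Sketch: decode username/password from a captured MQTT 3.1.1 CONNECT packet.
    // Assumes the 1-byte Remaining Length form and no Will fields.
    function readString(buf, offset) {
      const len = buf.readUInt16BE(offset);  // strings are 2-byte length-prefixed
      return { value: buf.subarray(offset + 2, offset + 2 + len).toString(), next: offset + 2 + len };
    }

    function parseConnect(buf) {
      let o = 2;                              // skip packet type + remaining length
      o = readString(buf, o).next;            // protocol name "MQTT"
      o += 1;                                 // protocol level (4)
      const flags = buf.readUInt8(o); o += 1; // connect flags
      o += 2;                                 // keep-alive
      const clientId = readString(buf, o); o = clientId.next;
      let username = null, password = null;
      if (flags & 0x80) { const u = readString(buf, o); username = u.value; o = u.next; }
      if (flags & 0x40) { password = readString(buf, o).value; }
      return { clientId: clientId.value, username, password };
    }
    ```

    Running this over the bytes a capture tool shows for an unencrypted CONNECT reveals the credentials directly, which is exactly why TLS matters here.
    
    
    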
\n" }, { "Id": "6029", "CreationDate": "2021-10-26T00:57:43.140", "Body": "

    Many smart appliances can only be controlled by the original BLE remote. To control such an appliance from your own smart home system, there needs to be a way to replace the original BLE remote with something that is compatible with your smart home system.
    

\n

    It is easy to replace an IR remote by capturing the command codes over the air, but a BLE remote and its appliance seem to have pre-shared keys. How can you link to the appliance with your own BLE node, such as an ESP32, and how can you learn the command formats? Is there an existing generic way to do such things?
    

\n

Appendix:

\n

Examples:

\n

    Example 1: turn a BLE TV on/off or switch its HDMI channel under some given condition (e.g. a specific person is doing a specific thing).
    

\n

    Example 2: some appliances (such as a smart toilet) have built-in offline voice command recognition, which is error prone; the idea is to use an external camera and/or an online speech recognition system and automatically send a BLE command to the appliance under some condition.
    

\n

Note:

\n

    Why not use appliances that are compatible with an existing standard smart home system such as Alexa?
    

\n

    Reason 1: many appliances don't support this and may only support a system that is not compatible with yours.
    

\n

    Reason 2: to protect privacy, a possibly better way is to build your own system with ESP32s, Raspberry Pis, PCs, etc.
    

\n", "Title": "Is there a generic way to replace BLE remote with your own BLE controller?", "Tags": "|smart-home|", "Answer": "

    There are some costly Bluetooth sniffers that will allow you to capture BLE traffic, including the key exchanges during pairing and all the messages you might need afterwards.
    

\n

    You can find cheaper BLE sniffers that do part of what the bigger ones can do: they are able to capture BLE data, and you can then provide them with your keys, allowing them to decipher the messages. In some cases they can also sniff the pairing key exchange, but they are much less effective at it.
    

\n

    But even then, assuming you are able to get the keys and see what's exchanged, it will be hard for you to reverse engineer the protocol, unless it uses standard HID codes shared on the proper BLE service. That's possible but very unlikely for "Smart" devices; I would expect most products to use a non-standard service with custom commands.
    

\n

    So I would say that if you want to build a product that will interface with such devices, the best you can do is contact the product manufacturer and ask for a datasheet, so that you can build your own product based on it.
    

\n

    However, if it's for a DIY project, I'm not sure they will provide the datasheet unless you have a very good reason to get access to it, or a promising probable future business opportunity. That could be, for instance, developing yet another IoT protocol to rule them all, or integrating their device into an existing IoT stack.
    

\n" }, { "Id": "6034", "CreationDate": "2021-10-29T14:44:44.577", "Body": "

    I have provisioned my LoRaWAN class A device using OTAA on the ThingPark Community platform.\nNow it doesn't seem to receive a Join Accept message from the network in answer to the Join Request it sends. What could be the issue here?
    

\n", "Title": "LoRaWAN device not receiving Join Accept", "Tags": "|lorawan|thingpark|", "Answer": "

    There are several reasons that may cause a failed join procedure. The typical problems are the following:
    

\n
    \n
      1. The network server does not receive the Join Request because there are no LoRaWAN gateways in the neighbourhood, or the gateway that is supposed to forward the message has no backhaul connection.
    
  2. \n
      3. The network server receives the Join Request but silently ignores it because the device was provisioned with an incorrect DevEUI, AppEUI/JoinEUI or AppKey.
    
  4. \n
  5. The network server receives the Join Request and answers it by a Join Accept message, but the device does not receive it because the downlink message is sent with wrong TX parameters (Channel number, RX1/RX2 delay, Spreading Factor, etc.)
  6. \n
\n

    In the first two cases, you won't see any messages in the Wireless Logger application; in the third case, however, you will see both the Join Request and the Join Accept.
    

\n

If you can see the Join Accept message in Wireless Logger and your device keeps sending new Join Requests, it is most probably because the Join Accept is not sent with the TX parameters that are expected by your device. All TX parameters can be verified in Wireless Logger.

\n

    The expected initial TX parameters may vary from device to device, and that is why ThingPark introduced the concept of device profiles. When you provision your device, it is important to select the right device profile, which tells ThingPark what channel, data rate, RX delay and DL data rate offset should be used for sending the first downlink message, which is obviously the Join Accept. If your device does not have a pre-configured device profile on ThingPark yet, you can choose one of the Generic device profiles that is compatible with your device.
    

\n" }, { "Id": "6047", "CreationDate": "2021-11-06T00:41:52.317", "Body": "

I have 2 Sonoff Mini R2's installed as a part of some remodeling work I'm doing on my house:

\n

\"sonoff

\n

    What I love about these devices is that they allow you to turn any dumb toggle switch (what Sonoff refers to in their doc as an "SPDT switch") into a smart switch. I like this for 2 reasons:
    

\n
    \n
  1. My wife and I love vintage aesthetics and want to stay as true as we can to the original aesthetic of our home built in 1920, so using a regular toggle is truer to that aesthetic.
  2. \n
  3. It's easier to flip a toggle switch without looking at it (than a decora style switch) and you know from its tactile and audible feedback that it was successfully toggled.
  4. \n
\n

HOWEVER - I found out in my electrical inspection today that Sonoff devices are not UL rated/certified - and for me to pass the electrical inspection, I have to remove them.

\n

    I know that there are smart switches out there that mimic a "toggle-style" switch. They stick straight out and don't have the same tactile experience. Such a switch is what I will fall back to if I cannot find what I'm looking for.
    

\n

So is there a device like the Sonoff Mini R2 that:

\n\n

Note, the one drawback of the Sonoff Mini R2 (aside from its lack of UL certification) is that it doesn't report (/"push") when the SPDT switch is toggled, which forced me to implement a polling strategy to catch physical switch flip events and act on them for my automations.

\n

    Incidentally, I can tell why these devices don't have a UL rating. I had a heck of a time connecting my 12-gauge wires to them without them popping out. The wires for my second device are stranded 12-gauge, and those seemed nearly impossible to connect securely.
    

\n", "Title": "Any devices that do what the Sonoff Mini R2 does, but is UL rated?", "Tags": "|ac-power|sonoff|", "Answer": "

    I recently dug into the Shelly switches and installed my first. Most are UL certified, the Shelly 1 in particular.
    

\n

The most "heavy duty" of the Shelly switches only support up to a 16 amp load, so they need to be protected by a 15A breaker, and, therefore, can be installed on 14AWG wiring. Unfortunately, they will not take 12AWG wire (as I learned the hard way). You can, however, pigtail from #12 to #14 in the box where you're installing it (so long as there's a 15A breaker in there; if it's a 20A breaker, you're out of luck either way). On the bright side, you don't need 20A to protect lighting circuits, as you'd be hard pressed (especially if you're using LEDs) to get anywhere close to a 20A load for your whole house full of lights, and most houses don't have all lighting on one circuit.

\n

    I'm not sure what the Sonoff "DIY mode" is (as mentioned in your comment), but I have mine connected to Home Assistant, not connected to the internet, and have never installed the Shelly app on my phone, so they're pretty much standalone. Once powered up, it exposes its config via a self-hosted web page where you can then configure it to use your home's WiFi, turn on internet access (mine was off by default), etc.
    

\n

    In the config, you can set the switch to "edge" mode, which effectively turns the Shelly & the physical switch it's connected to into a 3-way pair: a toggle of either will toggle the lights. There are other modes, so if you want "up" = "on" on the physical switch, you can select the proper mode; you may have to toggle the physical switch once to get the current light status and switch direction in sync before the switch will control the lights it's attached to. I think this addresses your push/polling question.
    

\n
\n

No, I'm not a Shelly sales rep. I spent more than a week reading up on them and found a YouTube channel with a guy who did a really nice job of explaining at a 1st grade level the differences between the different Shelly models and (in his linked blog) showed exactly how to wire each of the different relays, because Shelly's own documentation is not very clear. I'm just a happy user.

\n
\n

Oh, one last thing. When you finalize your wiring, make sure that you don't have any bare copper sticking out of the device like that! That's just asking for a fire-starting arc to happen between the two hot wires. For the Shellys, you do not have to strip very much insulation off, and it looks like it's the same for the Sonoff. Snip the ends of the wires a bit to ensure the insulation butts up against the device housing.

\n" }, { "Id": "6054", "CreationDate": "2021-11-13T18:49:43.760", "Body": "

I'm having trouble connecting to my chromecast in any apps. Sometimes it works fine and sometimes it just won't work. I can see that it is connected to my WiFi network and I've read the very helpful answers at chromecast-doesnt-show-up. After many resets and hours of fighting with my chromecast, I am wondering, would buying a new chromecast help my problems? It is a few years old, but I can't think what parts would degrade over time in a chromecast.

\n", "Title": "Can a device like chromecast degrade over time?", "Tags": "|wifi|hardware|chromecast|", "Answer": "

    That could be signal interference in your WiFi/Zigbee network that builds up over time, as new devices are added or some devices select the same channel.
    

\n

    I also experienced problems connecting to Chromecast and ended up connecting the Chromecast to my router by wire, using Cat5 networking cable. Moving to the wired connection improved reliability a lot; I don't even remember when I last reset the Chromecast after that.
    

\n

    There's a special power supply for Chromecast sold by Google that has an RJ-45 socket and provides not just power but Ethernet via USB: https://store.google.com/us/product/chromecast_ethernet_adapter_gen_2
    

\n" }, { "Id": "6068", "CreationDate": "2021-11-29T12:52:20.203", "Body": "

    I am currently making an IoT app that I'm trying to connect to a Raspberry Pi using MQTT. I use the react_native_mqtt package. The problem I have is that it doesn't connect. What I'm trying to achieve is to receive data from the Raspberry Pi and use it with React Native, but the connection doesn't work.\nAny help is appreciated.
    

\n
\n

Error: Object { "errorCode": 7, "errorMessage": "AMQJS0007E\nSocket error:undefined.", "invocationContext": undefined, }

\n
\n
import init from 'react_native_mqtt'\nimport AsyncStorage from '@react-native-async-storage/async-storage'\n\n init({\n  size: 10000,\n  storageBackend: AsyncStorage,\n  defaultExpires: 1000 * 3600 * 24,\n  enableCache: true,\n  reconnect: true,\n  sync : {\n  }\n});\n\nconst host = '52.11.11.11'\nconst port = '8883'\nconst connectUrl = `mqtt://${host}:${port}`\nclientID = "clientID-" + parseInt(Math.random() * 100);\nexport default class TestScreen extends Component {\n\n  constructor(){\n    super();\n    this.onConnectionLost = this.onConnectionLost.bind(this)\n    this.onConnect = this.onConnect.bind(this)\n    const client = new Paho.MQTT.Client(host, Number(port), clientID);\n    client.onConnectionLost = this.onConnectionLost;\n    client.connect({ \n      onSuccess: this.onConnect,\n      useSSL: true,\n      userName: 'admin',\n      password: 'admin',\n      onFailure: (e) => {console.log("here is the error" , e); }\n\n    });\n\n    this.state = {\n      message: [''],\n      client,\n      messageToSend:'',\n      isConnected: false,\n    };\n  }\n  onConnect = () => {\n   // const { client } = this.state;\n    console.log("Connected!!!!");\n    //client.subscribe('hello/world');\n    this.setState({isConnected: true, error: ''})\n  };\n
\n

Nodejs.test:

\n
const mqtt = require('mqtt')\nconst host = '52.xx.xx.xx'\nconst port = '1883'\nconst clientId = `id_${Math.random().toString(16).slice(3)}`\nconst connectUrl = `mqtt://${host}:${port}`\nconst client = mqtt.connect(connectUrl, {\n  clientId,\n  clean: true,\n  connectTimeout: 4000,\n  username: 'xxx',\n  password: 'xxx',\n  reconnectPeriod: 1000,\n})\nmodule.exports = client;\n\nconst topic = 'EnergyMonitoring/energy'\n\nclient.on('connect', () => {\n  console.log('Connected')\n  client.subscribe([topic], () => {\n    console.log(`Subscribe to topic '${topic}'`)\n  })\n})\n\nclient.on('message', (topic, payload) => {\n  console.log('Received Message :', topic, payload.toString())\n})\n\napp.listen(1883, ()=>{\n  console.log("server is running")\n})\n
\n", "Title": "Make connection between react native & raspberry & MQTT", "Tags": "|mqtt|raspberry-pi|azure|paho|", "Answer": "

    We solved this by editing the Mosquitto config file to add a new listener on port 8883 and to use the WebSocket protocol for that port.
    

\n

From Stack Overflow: Seting mosquitto broker to listen on two ports?

\n
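    For reference, a minimal mosquitto.conf sketch of this dual-listener setup might look like the following (ports taken from the question; authentication and certificate settings omitted):

    ```
    # plain MQTT for the Node.js client
    listener 1883
    protocol mqtt

    # WebSockets for the react_native_mqtt / Paho browser-style client
    listener 8883
    protocol websockets
    ```
    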

    It was a config problem. Thanks @hardillb for the help and @kalyanswaroop for the replies.
    

\n" }, { "Id": "6087", "CreationDate": "2021-12-21T22:58:42.313", "Body": "

I'm intending to create some (like 5-10) boxes with sensors and/or relays inside using ESP32's.

\n

For connection to a laptop and mobile phone I found a tutorial at\nESP32 WebSocket Server: Control Outputs (Arduino IDE) which uses an async webserver with sockets.

\n

    However, I wonder, if I were to use multiple boxes, how to solve this. I thought of one of the following 'solutions':
    

\n

I want to have the boxes work together (e.g. to make a web page that I can set that e.g. when sensor X has value Y than on another box relay 1 should be on).

\n\n

    I guess the second or third solution is best. Then my next question would be: what is the best way to have the boxes communicate with each other? Can I, for example, have a websocket for the HTML page(s) on the laptop/mobile phone to the server, and next to that use HTTP messages to communicate with all other boxes?
    

\n

    I hope this question is not too generic; I'm just a newbie to webservers/sockets.
    

\n

    Note, I prefer to use my router as this is in the middle of the house and all my boxes will be in its vicinity (more or less close).
    

\n", "Title": "How to connect several sensor boxes (ESP32) via WIFI/webserver/...?", "Tags": "|wifi|esp32|web-sockets|", "Answer": "

In a distributed system like this, the key questions are where the 'intelligence' for the system will reside, how it will communicate with the peripherals, and what interface you need to configure & monitor the system.

\n

    Personally I think the Async Webserver plus Websocket technique is an overly complex solution to this problem; you could just run an ordinary non-async Web server on each ESP32, which takes in requests for data and also accepts settings. For example, if you request the Web page "status.csv" you get a comma-delimited list of your sensors' values, and "control.htm?output1=1&output2=0" would set the two outputs accordingly.
    

\n
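    For instance, the browser side could fetch and parse such a "status.csv" endpoint like this (the endpoint, hostname and field names are made up for illustration):

    ```javascript
    // Parse the comma-delimited status line from a sensor box into named values.
    // The field names are illustrative; they would match your own sensor layout.
    function parseStatusCsv(text, names) {
      const values = text.trim().split(',').map(Number);
      return Object.fromEntries(names.map((n, i) => [n, values[i]]));
    }

    // In a browser you would drive this with fetch(), e.g.:
    //   fetch('http://box1.local/status.csv')
    //     .then(r => r.text())
    //     .then(t => parseStatusCsv(t, ['temperature', 'relay1', 'relay2']));
    ```
    
    
    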

When it comes to communication between the sensor boxes, it is generally not a good idea to distribute the intelligence amongst them, as programming and debugging 5 or more inter-communicating systems can be really hard work; it is much easier to have a single coordinator that controls everything. This would preferably be a disk-based system, so you can log all the data for diagnostic purposes.

\n

You will presumably need a way of displaying & changing the system status and nowadays that is generally a Web page, which could be served up by the management system. When running on a browser, it would use javascript AJAX calls to fetch the data from the peripherals, and generate a pretty display; I've used this technique in an ESP32 oscilloscope project

\n

    The key advantage of having a Web-based system is that you can point your Web browser at any of the ESP32 units and get at its raw data - this is very useful if something has gone wrong and you want to check whether it is due to a fault in one of the remote units, or a bug in the management software.
    

\n" }, { "Id": "6089", "CreationDate": "2021-12-22T19:19:19.753", "Body": "

How to detect the relative direction of two mobile devices?

\n

    When a mobile device (i.e. a modern smartphone) is brought near a second mobile device, it is possible to detect the inter-device proximity distance using proximity sensors, UWB, NFC and so on. Now, how can one detect the direction of the first mobile device relative to the second? I mean, how to detect in which of the directions among left, right, up, down, front & back the first device is located relative to the second device? Any reference to papers or prior work would be appreciated.
    

\n", "Title": "How to detect the relative direction of two mobile devices?", "Tags": "|sensors|mobile-applications|positioning|", "Answer": "

    Direction detection is one of the selling points for UWB (Ultra-Wideband) and Bluetooth 5.1 when combined with antenna arrays.
    

\n

This allows for AoA (Angle Of Arrival) to be calculated which will give you a relative direction to another device.

\n

    But given the form factor of a phone, it only really allows for antenna separation in a single plane, which means the angle direction is similarly constrained (left/right, not up/down).
    

\n" }, { "Id": "6092", "CreationDate": "2021-12-24T08:48:55.223", "Body": "

Does iPhone 11 device contain multiple UWB receivers?

\n

Angle of Arrival (AoA) is often required for many applications for UWB enabled devices. I wonder if modern UWB enabled devices i.e. iPhone 11 contain multiple UWB receivers to support AoA or not?

\n", "Title": "Does iPhone 11 device contain multiple UWB receivers?", "Tags": "|sensors|hardware|mobile-applications|uwb|", "Answer": "

    These systems use multiple antennas, not multiple distinct radios, to compute the angle of arrival based on the time of arrival at each antenna.
    

\n
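    The geometry behind this is straightforward: the time (or phase) difference between two antennas a known distance apart determines the angle. A textbook far-field sketch, not Apple's actual implementation:

    ```javascript
    // Angle of arrival from the time-difference-of-arrival at two antennas.
    // Classic far-field approximation; the spacing value below is illustrative.
    const SPEED_OF_LIGHT = 299792458; // m/s

    function angleOfArrivalDegrees(deltaT, antennaSpacing) {
      const pathDiff = SPEED_OF_LIGHT * deltaT;               // extra distance to the far antenna
      const ratio = Math.max(-1, Math.min(1, pathDiff / antennaSpacing));
      return Math.asin(ratio) * 180 / Math.PI;                // angle from broadside
    }
    ```
    
    
    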

    It's probably worth noting that these radios tend to be SDRs (software-defined radios), which detect signals by doing frequency analysis on a whole chunk of spectrum rather than tuning to a specific frequency. This makes ToA and AoA calculations relatively easier than trying to sync a high-rate time source across multiple radios.
    

\n" }, { "Id": "6094", "CreationDate": "2021-12-24T15:36:08.113", "Body": "

I bought a set of Nedis SmartLife Wi-Fi electric sockets. I connected them to my home Wi-Fi network and installed the control app on my phone. To my great surprise I can turn the plugs on even from very far away (different city). What makes this possible since I thought these devices are operating in my home Wi-Fi network? How can it be possible to control them from outside the range of this WiFi?

\n", "Title": "I\u2019m puzzled: Why can I control my Nedis Wi-Fi Smartplug even from a different city?", "Tags": "|smart-plugs|", "Answer": "

    Both the smart plug and your phone connect to a server somewhere on the Internet (“in the cloud”).
    

\n

When you use your phone to control the plug, it sends the command to the server, which then sends it to the plug. And vice-versa for the plug status.

\n" }, { "Id": "6097", "CreationDate": "2021-12-26T18:40:47.237", "Body": "

How can I access my IoT peripheral device (raspberry pi B3+) on my local network in a reliable way? I want to use WiFi not BLE. I want to protect against the device getting a different IP when it is powered off and on again.

\n

Is there a way to set a domain name for my device that only works locally? Otherwise, is setting a static IP the best option?

\n", "Title": "Named URLs on Local Network", "Tags": "|raspberry-pi|networking|wifi|ip-address|", "Answer": "

There are quite a few ways to achieve this.

\n

First, many DHCP servers will hand out the same IP to the same device, even if it has been switched off for a while (and most certainly if the current DHCP lease has not expired).

\n

You may want to make sure in the settings of your DHCP server (usually your router) that you have enough addresses in your pool, and possibly increase lease duration if it\u2019s something you can change.

\n

    If your current DHCP server does not cooperate, you may want to switch to another device (though it needs to be permanently on and connected to the network).
    

\n

Of course, you could do static DHCP assignments if the DHCP server allows it.

\n

Otherwise you can use static IP addresses (make sure you take those addresses out of the DHCP pool of course).

\n

All this would help having the same IP.

\n

Alternatively, you could use names which resolve to the dynamic IP. There are several methods to do that, depending on the combination of devices.

\n

One option is to use mDNS / DNS-SD. You can advertise a name like rpi.local from the Pi and then use that name from other computers. You could use the dns-sd tool on the Pi to do so. Some distributions may actually already advertise the host name that way.

\n

There are also alternatives based on uPnP/SSDP, Windows/SMB, and probably a few others.

\n" }, { "Id": "6106", "CreationDate": "2022-01-01T14:59:17.453", "Body": "

    As per the title, I'm looking for a way to detect the press of a button powered at 220 V AC through a GPIO pin of the ESP32.
    

\n

    The simplest solution I found seems to be an optocoupler, but are there devices that support such a high voltage difference? What could a connection scheme look like?
    

\n", "Title": "ESP32 or ESP8266 220v button as input", "Tags": "|esp8266|esp32|", "Answer": "

    I would recommend just getting a small normally open (NO) relay with a 230 V AC coil, then running your ESP32's Vcc (3.3 V) to one relay contact and back from the other contact to a GPIO pin.
    

\n

    Alternatively, check out an AC mains detector module for schematics on how to do it with an optocoupler.
    

\n" }, { "Id": "6125", "CreationDate": "2022-01-17T13:48:07.897", "Body": "

I just learned that Philips Hue now supports configurable power on behavior. The user can choose between off, on at default values or the last known setting.

\n

This sounds very good because my wife is less enthusiastic about home automation than me, so she would really like to be able to turn on any light, to bright white setting, by flipping the switch (twice if needed).

\n

Do any other bulbs also have this function?

\n", "Title": "Which smart bulbs support configurable power on bahaviour?", "Tags": "|smart-lights|", "Answer": "

    Currently, in addition to Philips Hue, WiZ (Philips WiFi) & LIFX (WiFi) have this feature. All Tuya devices that are custom-flashed with Tasmota also have this feature. Finally, all Zigbee bulbs turn on at 100% brightness every time; however, only Hubitat allows customization of this (using a hub-based driver), while all other hubs make it so that the bulb MUST come on when powered on. Some regular Tuya (not flashed) lights also have this feature, but it's a gamble.
    

\n

    WiZ is the most customizable overall: it allows you to change the color, CT and brightness on startup. LIFX allows "power on" & "last state". Both of these bulbs have a dual-click function, where rapidly flicking the switch twice gets you a different color/CT/brightness/on-or-off state.
    

\n

I hope this helped! Feel free to ask me more questions so that you can make the right choice.

\n" }, { "Id": "6127", "CreationDate": "2022-01-18T09:10:35.987", "Body": "

I just bought an ESP32-C3-DevKitC-02 from Espressif, which features an onboard color LED. I'm a bit of a newbie in this space, so I thought I'd start by trying to get a simple "blink" program running. I've installed the Espressif VSCode extension, and I can successfully compile and load (flash) code. ...But the basic "blink" example doesn't blink the LED --- probably because the GPIO number is wrong. I did find a library for managing LEDs for this board, but it is generic and also accepts a GPIO as input, and running its sample code didn't work either. (Both the generic blink and the library samples specified GPIO 5 as the connection.)

\n

Does anybody know what the right value is, or am I missing something else?

\n", "Title": "How to use the onboard LED on the ESP32-C3-DevKitC-02", "Tags": "|esp32|", "Answer": "

As stated in the page you linked to :) and as stencilled directly on the board, it\u2019s connected to GPIO8.

\n

Note that it\u2019s an \u201caddressable\u201d RGB LED, so samples which just toggle an output won\u2019t work, it needs to use a library for addressable LEDs. I suppose this is a WS2812 or equivalent, also known as a Neopixel in the Adafruit world, but I haven\u2019t checked further.

\n" }, { "Id": "6131", "CreationDate": "2022-01-19T02:02:42.897", "Body": "

    What is the best practice for certificate management on an IoT device (a bare-metal microcontroller with a GSM modem that communicates with a secured MQTT broker)?
    

\n

    Looking at Let's Encrypt certbot, certificates have a 90-day expiry, after which they should be renewed. Should the certs also be sent via the existing secured MQTT connection when it is time for renewal? What if the device was turned off for a long time and, when powered on again, is already past the cert expiry? I learned that a self-signed certificate is a bad idea; however, using one would greatly simplify certificate management, as the expiry date could be lengthened to many years.
    

\n

I am wondering how the wifi cameras handle their certificate management.

\n", "Title": "Certificate Management for IOT devices", "Tags": "|security|", "Answer": "

There are multiple pieces here.

\n
    \n
      1. You can make a HTTPS call and be sure of the server you're talking with, if you have the right valid root CA certificate. These are usually going to last 20 years.\nIf you don't have one, you may still be able to make the call, but cannot be sure of the server.
    
  2. \n
      3. You could make such an HTTPS call to get a new device-specific certificate. The server then needs some way to know it's a valid device that is calling the HTTPS service to get a certificate. You could use your old certificate to generate the secret, or use some other device-specific secret that you put in when you made the device, and use a hash based on it.\nThis is needed if it matters that it has to be your hardware. If not, this step can be relaxed a bit, and a new certificate can be sent to the device without any device-specific secret. Maybe it's tied to a user account by generating a random number on the device, showing it to the user on the device UI and having the user enter it into your web page. That way, the user is bringing in this device and you're trusting the user. The device then gets the certificate and works on behalf of the user from that point on.
    
  4. \n
\n" }, { "Id": "6138", "CreationDate": "2022-01-20T13:06:21.787", "Body": "

    I have a mosquitto MQTT broker on a Raspberry Pi. I have some topics from a client, in the format SolutionCommand/state, carrying an array like 110001.
    

\n

    On the other hand, I can publish to topics like SolutionControl/WP, which can be on or off. So the MQTT client would subscribe to SolutionCommand/state first of all, and publish to SolutionControl/WP after processing the value that was received.\nBut here, while processing, how can I hold the value of the switch until receiving the current value from SolutionCommand/state? This process takes 7 s to receive the value.
    

\n

    The question is: how can I hold the state for 7 s after publishing?
    

\n

    Here is my code below:
    

\n
const client = new Paho.MQTT.Client(process.REACT_APP_HOST, Number(process.REACT_APP_PORT), clientID);\n\n if(client){\n            client.connect({ \n              cleanSession : false, \n              onSuccess : .... \n        \n            });\n          \n          }\n          \n        }, [client])\n\n        const onConnect = () => {\n          client.subscribe(topic1,  { qos: 1 });\n        \n        };\n\n const onMessageArrived = (payload)=> {\n     \n          if(payload.payloadString[0] == 1)\n          {\n            setSolutionPump('ON')\n            setIsEnabled(true)\n          }\n          else if(payload.payloadString[0] == 0){\n            setSolutionPump('OFF')\n            setIsEnabled(false)\n          }\nconst toggleSwitch1 = () => {\n \n          const topic = "SolutionControl/WP";\n          if(!isEnabled1) {\n\n                setIsEnabled1(previousState => !previousState);\n                client.publish(topic, "on", 1);\n            \n          }\n          else {\n                \n                setIsEnabled1(previousState => !previousState);\n                client.publish(topic, "off", 1);\n             \n          } \n              \n    }\n
\n", "Title": "Determine state before publish", "Tags": "|mqtt|mosquitto|paho|publish-subscriber|actuators|", "Answer": "

    The solution was: if the button is pressed, I stop receiving data until the new value comes.
    

\n
    onMessageArrived = (payload) => {\n            // ignore the stale message that arrives right after a button press\n            if (this.state.isPressed) {\n                this.setState({isPressed: false})\n                return;\n            }\n            // ...otherwise process the payload as before\n        }\n
    
\n" }, { "Id": "6143", "CreationDate": "2022-01-24T10:15:35.500", "Body": "

    I am thinking of buying an RFM69HCW FSK module from Adafruit.\nHowever, I am confused as to which module to buy: the 433 MHz or the 900 MHz version. Are both allowed in the UK?\nIf they are, then why do they make two different versions? Does one transmit further than the other, or does one use more power than the other?\nThere is also the option of an RFM9X, which again comes in two bands, but this time operates using LoRa instead of FSK. I understand that this one can transmit further than the FSK module.\nAny help is much appreciated.
    

\n", "Title": "Allowed RF frequencies in UK using RFM69", "Tags": "|smart-home|communication|lora|", "Answer": "

Different bands may or may not be allowed in different countries, or not with the same rules (power, duty cycle, etc.). Also the exact band may vary (for instance the \u201c900 MHz band\u201d is actually the 868 MHz band in the EU but the 915 MHz band in the US). Some devices may support both, others only one (the RFM69HCW supports both).

\n

Lower frequency bands usually have better propagation through obstacles, but regulatory limits (bandwidth, duty cycle, max power..) may be different. IIRC the 433 MHz band is quite limited (even more than the 868 MHz band, which is already crippled).

\n
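The frequency dependence can be made concrete with the free-space path loss formula (a simplified model that ignores obstacles and antenna gains; the figures are illustrative only):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# All else being equal, a 433 MHz link loses about 6 dB less than an
# 868 MHz link over the same distance:
print(round(fspl_db(1, 868) - fspl_db(1, 433), 1))  # → 6.0
```

Real-world propagation through walls favours lower frequencies even more, but as noted above, the allowed power and duty cycle per band can cancel that advantage out.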

LoRa can have significantly higher range, at the cost of very very very slow transmission in some cases, with very limited duty cycles, which means in those conditions they are suitable for sending a few bytes here and there, not much more.

\n

Note however that the largest factor for distance is unobstructed line of sight between the two antennas.

\n

Not knowing your needs (distance, indoors or outdoors, line of sight, amount of data, intervals between transmissions, power consumption\u2026) it is difficult to advise one device/tech/band over another.

\n" }, { "Id": "6148", "CreationDate": "2022-01-25T21:00:48.620", "Body": "

Can I turn my "Gosund SP1-C" electrical socket on/off from the local network? (Via telnet, for example, like with Yeelight bulbs.) I see that ports 80, 8080 and 5353 are open, but I can't find any suggestion of what command I can send to the Gosund. (I also failed with other tutorials, because this device doesn't use the 'tuya app': it connects directly to Apple_cloud/homeKit/home_app and apparently can't be added to or appear in the 'tuya app'. That's why I can't get the id/key needed for connecting to the Gosund via CLI.)

\n", "Title": "Gosund (tuya) SP1-C managment via telnet", "Tags": "|web-sockets|lan|", "Answer": "

I found a way to manage my Gosund SP1-C outlet via a Home Assistant server and/or the Home Assistant app.\nI need to:

\n
    \n
  1. Pair with the Gosund SP1-C outlet via the Home app on an iPhone

    \n
     - this adds my Gosund SP1-C outlet to the 'local' network + iCloud (or HomeKit server)\n
    \n
  2. \n
  3. Remove the Gosund SP1-C outlet from the Apple Home app

    \n
     - this removes my Gosund SP1-C from iCloud (or the HomeKit server), but it stays paired to the 'local' network\n
    \n
  4. \n
  5. Now the Home Assistant server is able to automatically discover the Gosund SP1-C outlet, and you just need to pair with it using the code on the QR label attached to the outlet (or its box).

    \n
     - * both of them: `Home Assistant` server and `Gosund SP1-C` outlet need to be in the same local network\n\n - * the `HomeKit Controller` integration needs to be already installed on the `Home Assistant` server `http://ip_adress_of_your_server:8123/config/integrations`\n
    \n
  6. \n
\n

After these steps you are able to control your HomeKit-only Gosund SP1-C outlet from an Android phone or a browser via the Home Assistant server web page.

\n

https://www.home-assistant.io/integrations/homekit_controller/

\n" }, { "Id": "6156", "CreationDate": "2022-01-31T12:43:46.647", "Body": "

Well, I have no experience with the ESP32 UART; I tried it and failed. In brief: I need to implement a data exchange between the ESP32 and an external module via UART, and at some point I need to read data from the external module, but I get garbage instead.

\n

Details:\nI'm playing with KNX. I've made a simple setup with some devices, and have also made 2 completely identical devboards with the NCN5120. The NCN is configured to use UART 19200 8E1. One board is connected to a Raspberry Pi, the other to an ESP32 Wrover devkit (I also tried a bare Wrover with the same results). The ESP32 UART is on pins 21/22.\nOn the Pi I use Java with NRSerial, and everything works fine. The full code is quite large, but a minimal example just inits the KNX transceiver, assigns the address and reads frames (i.e. telegrams). Java init code:

\n
int baudRate = 19200;\nNRSerialPort serial = new NRSerialPort(portName, baudRate);\nserial.setDataBits(8);\nserial.setParity(SerialPort.PARITY_EVEN);\nserial.setStopBits(SerialPort.STOPBITS_1);\nserial.connect();\n
\n

Then I init NCN, assign address and read frames. Everything is working fine, like, here's the readings when I read value from some of my KNX devices on the bus:

\n
Using port /dev/ttyAMA0\nSent reset, got response 0x03\nSent address, got response 0x21\nBC 11 07 00 06 E1 00 00 B2\n
\n

But with the ESP32 I get garbage input from the NCN. Init and address assignment both work correctly, but the bus readings are completely different for the same frame:

\n
SDK version 4.3.2\ninit written ok\ninit response read ok\nAssign address written ok\nAssign address response read ok\n\n\n47 BC 11 07 00 46 F5 54 14 B2 57\n
\n

I tried swapping my KNX boards, but each of them works fine with the Pi and fails with the ESP32, so there's probably an error in my code. Please help. The minimal example for the ESP32 follows:

\n
{\n    ets_printf("\\nStarting\\nSDK version %s\\n", esp_get_idf_version());\n\n    uart_config_t uartConfig = {\n        .baud_rate = 19200,\n        .data_bits = UART_DATA_8_BITS,\n        .parity = UART_PARITY_EVEN,\n        .stop_bits = UART_STOP_BITS_1,\n        .flow_ctrl = UART_HW_FLOWCTRL_DISABLE,\n        .rx_flow_ctrl_thresh = 122,// Tried also 0\n        .source_clk = UART_SCLK_APB // Tried also UART_SCLK_REF_TICK\n                      };\n\n    int uartPortNumber = 1; // Tried also 2\n    int pinTx = 21;\n    int pinRx = 22;\n\n    uart_driver_install(uartPortNumber, 1024, 0, 0, NULL, 0);\n    uart_param_config(uartPortNumber, &uartConfig);\n    uart_set_pin(uartPortNumber, pinTx, pinRx, UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);\n    uart_set_baudrate(uartPortNumber, KNX_UART_SPEED);\n    uart_set_word_length(uartPortNumber, UART_DATA_8_BITS);\n    uart_set_parity(uartPortNumber, UART_PARITY_EVEN);\n    uart_set_stop_bits(uartPortNumber, UART_STOP_BITS_1);\n    uart_set_hw_flow_ctrl(uartPortNumber, UART_HW_FLOWCTRL_DISABLE, 0);\n    uart_set_mode(uartPortNumber, UART_MODE_UART);\n\n    uint8_t byte;\n\n    byte = 1;\n    if (uart_write_bytes(uartPortNumber, &byte, 1) != 1)\n    {\n        ets_printf("Init writte error\\n");\n    }\n    else\n    {\n        ets_printf("init written ok\\n");\n    }\n    if (uart_read_bytes(uartPortNumber, &byte, 1, 10000 / portTICK_RATE_MS) != 1 || byte != 3)\n    {\n        ets_printf("Init response read error\\n");\n    }\n    else\n    {\n        ets_printf("init response read ok\\n");\n    }\n\n    uint8_t address[4] = {0xf1, 0x41, 0x02, 0x00};\n    if (uart_write_bytes(uartPortNumber, &address, 4) != 4)\n    {\n        ets_printf("Assign address writte error\\n");\n    }\n    else\n    {\n        ets_printf("Assign address written ok\\n");\n    }\n    if (uart_read_bytes(uartPortNumber, &byte, 1, 10000 / portTICK_RATE_MS) != 1 || byte != 0x21)\n    {\n        ets_printf("Assign address response  read error\\n");\n    
}\n    else\n    {\n        ets_printf("Assign address response read ok\\n");\n    }\n\n    while (true)\n    {\n\n        int res;\n        res = uart_read_bytes(uartPortNumber, (void *)&byte, 1, 10000 / portTICK_RATE_MS);\n\n        if (res == 1)\n        {\n            ets_printf("%2.2X ", byte);\n        }\n        else\n        {\n            ets_printf("\\n");\n        }\n\n        vTaskDelay(1);\n    }\n}\n
\n

Thank you!

\n", "Title": "ESP32 UART reads garbage", "Tags": "|esp32|communication|", "Answer": "

Configuring a UART for 8 bits plus even parity is a red flag. This configuration has been obsolete for decades. A lot of current hardware doesn't actually support it. Set to 8-bit no parity.
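For context, "8E1" means each byte is followed by an even-parity bit (the bit that makes the total count of 1-bits even) plus one stop bit. A minimal sketch of the parity calculation, just to illustrate what the receiver checks on every byte:

```python
def even_parity_bit(byte):
    """Return the parity bit for even parity: 0 if the byte already has an
    even number of 1-bits, 1 otherwise."""
    return bin(byte & 0xFF).count("1") % 2

# The NCN5120 responses seen in the question, 0x03 and 0x21, both have an
# even number of 1-bits, so their even-parity bit is 0:
print(even_parity_bit(0x03), even_parity_bit(0x21))  # → 0 0
```

A mismatch between the two ends' framing settings (parity, data bits, stop bits) is one classic source of garbage bytes on a UART link.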

\n" }, { "Id": "6162", "CreationDate": "2022-02-04T00:16:44.773", "Body": "

I've just used tuya-convert to flash a Soundance C198 powerstrip. This is my first time doing anything like that, and I'm a little lost. The web interface gives me a list of gpios 0-10, and 12-17. These can be configured as LEDs 1-4 (either with an led or led_i setting).

\n

At some point, I set all of those to LED, and through the console I ran the LedPower1 1 and LedPower 1 commands. These do not seem to produce any errors, but they also don't turn on any LEDs (there are 2 on the device).

\n

Am I supposed to be using some other command to turn these on? Can I turn on more than one gpio at a time (so I can do a binary search)?

\n

The list of Tasmota-supported devices does not include this device, nor does Google turn up much other than an Amazon UK page to buy more.

\n", "Title": "What tasmota commands (web console) will turn on an indicator LED that I suspect is at gpio3?", "Tags": "|tasmota|", "Answer": "

The problem seems to stem from using the Configure Template menu option in Tasmota. While it provides a nice user interface that displays all of the options, clicking save after making changes doesn't always result in those changes taking effect (either immediately or sometimes at all).

\n

It is much better to use the Configure Other menu option, and input the template manually, as a json string:

\n
{"NAME":"ID Relays","GPIO":[0,0,0,0,224,225,0,0,226,227,228,229,230,0],"FLAG":0,"BASE":18}\n
\n

Everything should be put in as a relay at first (because only relays will have the clickable buttons when the page reloads). Saving from the Configure Other page does seem to always activate the changes, though some configurations seem to be screwy enough that not much will work.

\n

The first json string to use should be:

\n
{"NAME":"ID Relays","GPIO":[224,225,226,227,228,229,0,0,230,231,0,0,0,0],"FLAG":0,"BASE":18}\n
\n

Toggle all 8 of the buttons, and see what happens. At least for my device, an LED will toggle as if a relay (but if you set it for the code number for LEDs, you get no button to test/debug with, so wait to do that for now).

\n

Some of your buttons will show as "off" or 0 when the device is on, and vice versa. These will have to be configured as relay_i, led_i, etc. (the _i is for "inverted"). You can wait to do that for now. When you find a device, mark down which one it corresponds to.

\n

The 7th and 8th positions should be left as zero, as well as the last. So if you have to hunt for more devices, then use this json string next:

\n
{"NAME":"ID Relays","GPIO":[0,0,0,0,0,0,0,0,0,0,224,225,226,0],"FLAG":0,"BASE":18}\n
\n

Once saved and the page reloads, test all 3 buttons as before. Mark down which are inverted, what device they activate.

\n

When you have all the devices identified, use the Configure Template page to adjust each to inverted, or to LEDs, or whatever. All relays have to be sequential, so that the top one is 1, the next (no matter how many gpios are skipped) is 2, and so on. Same for LEDs. Inverted relays and LEDs still share the same sequence. Once saved, it might be necessary to go into Configure Other and save it there to activate it.

\n

For this level of device discovery, console commands seem unnecessary. Also, Tasmota seems to want to take over the first LED, such that you may not get to use it with console commands anyway... seems to automatically make it into a power status indicator.

\n
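The template strings above follow a simple pattern, so if you need more probe rounds they can be generated rather than typed by hand. A hypothetical helper (the id_template name and slot layout are mine, not part of Tasmota; 224 is Relay1 in Tasmota's component numbering, as used in the templates above):

```python
import json

def id_template(slots, name="ID Relays"):
    """Build a Tasmota template that probes the given slots of the 14-entry
    GPIO array as sequential relays (224 = Relay1, 225 = Relay2, ...)."""
    gpio = [0] * 14
    for i, slot in enumerate(slots):
        gpio[slot] = 224 + i
    return json.dumps({"NAME": name, "GPIO": gpio, "FLAG": 0, "BASE": 18},
                      separators=(",", ":"))

# Reproduces the first probing template from this answer:
print(id_template([0, 1, 2, 3, 4, 5, 8, 9]))
```

Paste the resulting string into Configure Other as described above.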

In summary:

\n
    \n
  1. Only relays get virtual buttons on the main page/menu.
  2. \n
  3. The Configure Template page doesn't always activate changes you make.
  4. \n
  5. Tasmota is finicky about making similar devices sequential.
  6. \n
\n" }, { "Id": "6180", "CreationDate": "2022-02-21T01:20:36.597", "Body": "

I got the Dynamic Security plugin of Mosquitto mostly working. However, I'm not sure how to use the listClients command through the JSON approach, as explained here:

\n

https://github.com/eclipse/mosquitto/blob/master/plugins/dynamic-security/README.md#list-clients

\n

For example, this command works perfectly for me and it lists all the users stored in my /var/lib/mosquitto/dynamic-security.json file:

\n
mosquitto_ctrl -u steve -P Pass1234 dynsec listClients\n
\n

However, when I use the JSON approach like this:

\n
mosquitto_pub -u steve -P Pass1234 -t '$CONTROL/dynamic-security/v1' -m '{"commands":[{"command": "listClients","verbose":false,"count": -1,"offset": 0}]}';\n
\n

The result is no output at all.

\n

How do I get the list of clients through the JSON approach as indicated in the README.md file above?

\n", "Title": "How to list mosquitto clients using dynamic security feature?", "Tags": "|mosquitto|", "Answer": "

Assuming the answer to the question in the comments is "no":

\n

You need to also run:

\n
mosquitto_sub -u steve -P Pass1234 -v -t '$CONTROL/#'\n
\n

To subscribe to the correct response topic.

\n

The mosquitto_pub command will only ever publish messages; it will not subscribe to any topics or print out messages.
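To sketch the round trip in code: the dynsec plugin replies on the command topic with /response appended (per the plugin README), so a client has to be subscribed there before publishing. This snippet only builds the topic names and payload; wiring it to a live broker with paho-mqtt is left out:

```python
import json

COMMAND_TOPIC = "$CONTROL/dynamic-security/v1"
RESPONSE_TOPIC = COMMAND_TOPIC + "/response"  # where the plugin publishes results

def list_clients_payload(verbose=False, count=-1, offset=0):
    """JSON payload for the dynsec listClients command."""
    return json.dumps({"commands": [{
        "command": "listClients",
        "verbose": verbose,
        "count": count,
        "offset": offset,
    }]})

# Subscribe to RESPONSE_TOPIC (or $CONTROL/#), then publish this to COMMAND_TOPIC:
print(list_clients_payload())
```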

\n" }, { "Id": "6220", "CreationDate": "2022-03-16T21:35:44.230", "Body": "

I am trying to install micropython on my esp32 cam.

\n

I tried 3 differents ways to do it.

\n

First method: command line and PuTTY

\n

I installed Python and esptool with pip install esptool.

\n

Then I looked up which COM port my ESP32 was on (in my case COM5), then I erased the flash with this command

\n
esptool.py --chip esp32 --port COM5 erase_flash\n
\n

Here is the result, it looks fine to me.

\n
esptool.py v3.2\nSerial port COM5\nConnecting....\nChip is ESP32-D0WDQ6 (revision 1)\nFeatures: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None\nCrystal is 40MHz\nMAC: 58:bf:25:83:bd:64\nUploading stub...\nRunning stub...\nStub running...\nErasing flash (this may take a while)...\nChip erase completed successfully in 14.3s\nHard resetting via RTS pin...\n
\n

After that I flashed the firmware that I got from GitHub (lemariva/micropython-camera-driver); I also tried the one from the official MicroPython website

\n
esptool.py --chip esp32 --port COM5 --baud 460800 write_flash -z 0x1000 micropython_cmake_9fef1c0bd_esp32_idf4.x_ble_camera.bin\n
\n

Each time I got a result like this:

\n
0bd_esp32_idf4.x_ble_camera.bin\nesptool.py v3.2\nSerial port COM5\nConnecting....\nChip is ESP32-D0WDQ6 (revision 1)\nFeatures: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None\nCrystal is 40MHz\nMAC: 58:bf:25:83:bd:64\nUploading stub...\nRunning stub...\nStub running...\nChanging baud rate to 460800\nChanged.\nConfiguring flash size...\nFlash will be erased from 0x00001000 to 0x00181fff...\nCompressed 1575808 bytes to 1010481...\nWrote 1575808 bytes (1010481 compressed) at 0x00001000 in 23.8 seconds (effective 529.5 kbit/s)...\nHash of data verified.\n\nLeaving...\nHard resetting via RTS pin...\n
\n

After that I tried to connect to it with PuTTY

\n

\"enter

\n

I tried unplugging the ESP32, pressing the reset button, and hitting CTRL + Enter / CTRL + D.\nBut each time I got a black window and was unable to type anything

\n

\"enter

\n

Using Thonny

\n

I tried to flash the firmware with Thonny: click on the bottom-right "Micropython (ESP32)" > Configure Interpreter > Install or update firmware

\n

But after the installation I always get an error and can't type Python commands

\n

\"enter

\n

uPyCraft

\n

I also tried to do it with uPyCraft: when I select the COM port, it asks me to flash the firmware, which seems to work, but when I select the port again it asks to flash again.

\n

Problem

\n

I can't find a way to use MicroPython with my ESP32-CAM; every time the firmware seems to flash successfully, but I can't connect to the ESP32 after that.

\n

My esp card is:\n\"enter

\n

And the usb converter is a CH340 serial converter

\n

EDIT 20/03/2022

\n

I found out that my problem is not with the firmware, but with the communication between my computer (Windows 10) and the ESP32 board.

\n

I tried installing the firmware from another computer (a MacBook Pro) and this time I was able to execute Python on the board.

\n

Later, without any modifications, I tried again to just plug the board into my original computer and connect to it, but I was still unable to execute Python.

\n

So I think it could be a driver problem or something else related to my computer. For now I still don't know what else to try.

\n", "Title": "ESP32 CAM unable to use micropython and connect to the esp", "Tags": "|esp32|micropython|", "Answer": "

Hi Helmut Hissen and Gregory Boutte, thanks a lot for sharing your experiences. I am using a cheap ESP32-CAM (ESP32-S version) together with a CH341 UART converter on a Linux Mint machine and had to do things slightly differently.\nFirst I tried to follow your suggestion: modifying the configuration.ini file, restarting Thonny, and flashing the firmware with Thonny. But Thonny removed the extra lines from the configuration file and MicroPython would not run. Although the image was flashed perfectly, which I checked with esptool.py verify_flash --diff yes 0x1000 firmware.bin, I had no success. So I did the following.

\n
    \n
  1. First I flashed this firmware built by shariltumin not using Thonny but using esptool.py directly:
  2. \n
\n
esptool.py --chip esp32 --port /dev/ttyUSB0 write_flash -z 0x1000 WIFI+TLS/firmware.bin\n
\n
    \n
  2. Then I added the following lines to my Thonny configuration.ini file:
  2. \n
\n
[ESP32]\ndtr = False\nrts = False\n
\n
    \n
  3. After this I was able to run MicroPython using Thonny on my ESP32-CAM and run some commands.\n\"Micropython
  2. \n
\n" }, { "Id": "6226", "CreationDate": "2022-03-19T14:23:16.910", "Body": "

I've been using the Tuya app on iPhone for a couple of months, first with a Devola heater which broke down in a month so I sent it back, now with a Mylek.
\nI didn't realise until I got the Mylek that the UI in the Tuya app is driven by the device manufacturer, not Tuya itself, so the UI for the Mylek is totally different to the Devola's [which worked perfectly].

\n

The Mylek shows as online to the app, but offline to the Scene feature in Tuya, seemingly no matter what I do.

\n

The Mylek works fine if I'm addressing it directly from its own GUI.\nThe Mylek weekly timer (the [7] icon in the pic below), however, is an atrocious implementation, making me click one box per hour, for all 7 days, to set periods at which it should heat & to what temperature. It does actually work, though.
\nI decided instead to try the Scenes in Tuya itself. Set an on & off time, set a temperature [the rest I can figure out once I get this bit working].

\n

On and working, from the Mylek GUI\u2026

\n

\"enter

\n

This communication works 2-way, if I change anything on the device's panel, it reflects here [so long as the child lock is off].

\n

However, if I try to interact with the device in even the simplest way from the scheduler, it fails as 'offline'

\n

\"enter \"enter

\n

\"enter

\n

The device is quite clearly not offline. As far as I'm aware I have no firewalling at all inside this network subnet, only to the outside world.

\n

Any ideas as to where to look next? Or anyone aware of this as a 'known issue'?
\nTuya & Mylek's online help structures are both pretty obtuse & mainly written in Chinglish.

\n
\n

Since first writing this I have discovered, by accident, how to make 'stripes' for on/off periods in Mylek's own interface rather than having to tap in every single hour - something which had eluded me for days. I'd still prefer to be able to setup these as events instead, as they would provide finer granularity.

\n

\"enter

\n", "Title": "Mylek heater on Tuya app - online/offline", "Tags": "|smart-home|networking|", "Answer": "

I have at least partially figured this out\u2026

\n

I noticed that the heater would appear offline any time my phone wasn't connected to my home Wi-Fi.

\n

That made me suspect it was going to be a firewall issue. I have a corporate router [Sophos UTM] without UPnP, so often things that would appear to be fine actually get blocked. I'd already checked the firewall log entries & not seen anything obvious that was hitting it.

\n

Long story short, I have now punched a hole straight through it in both directions, allowing any & all comms to & from that one MAC address.

\n

This does indicate that the local 'direct' setting from the Mylek plugin for the app runs directly over the local subnet, but that any Tuya-specific data, such as scheduling, goes via Tuya.com & then has to be able to punch through from the outside world. This makes me think the Mylek device is not opening its own keep-alive from the inside, enabling a return message to come back down the same 'open line'.

\n

This is not an ideal long-term solution & I shall add to this answer when I discover exactly what ports & services I need to hone this down to, so I'm not effectively running it in a DMZ.

\n" }, { "Id": "6238", "CreationDate": "2022-04-05T09:11:54.927", "Body": "

I bought this shield and trying to receive LoraWAN packets in my surrounding.

\n

ESP32 Heltec Lora

\n

Within their SDK there are examples of receiving data over LoRa and joining LoRaWAN; however, I am interested in seeing LoRaWAN data in the air.

\n

Is this chip enough to be able to receive LoRaWAN packets in the air?\nI have had it running, hopping frequencies, for a week, and haven't seen a single packet.

\n

The code I am using for this is a slight modification of their ping-pong example, with channel hopping added.

\n
#include <ESP32_LoRaWAN.h>\n#include "Arduino.h"\n\n\n#define RF_FREQUENCY                                868000000 // Hz\n\n#define TX_OUTPUT_POWER                             15        // dBm\n\n#define LORA_BANDWIDTH                              0         // [0: 125 kHz,\n//  1: 250 kHz,\n//  2: 500 kHz,\n//  3: Reserved]\n#define LORA_SPREADING_FACTOR                       12         // [SF7..SF12]\n#define LORA_CODINGRATE                             1         // [1: 4/5,\n//  2: 4/6,\n//  3: 4/7,\n//  4: 4/8]\n#define LORA_PREAMBLE_LENGTH                        8         // Same for Tx and Rx\n#define LORA_SYMBOL_TIMEOUT                         0         // Symbols\n#define LORA_FIX_LENGTH_PAYLOAD_ON                  false\n#define LORA_IQ_INVERSION_ON                        true\n#define LORA_IQ_INVERSION_OFF                       false\n#define LORA_FHSS_ENABLED                           false\n#define LORA_CRC_ENABLED                            false\n#define LORA_NB_SYMB_HOP                            0\n#define RX_TIMEOUT_VALUE                            1000\n#define BUFFER_SIZE                                 1024 // Define the payload size here\nstd::vector<uint32_t> vFrequency;\n\nchar txpacket[BUFFER_SIZE];\nchar rxpacket[BUFFER_SIZE];\nstatic RadioEvents_t RadioEvents;\nvoid OnTxDone( void );\nvoid OnTxTimeout( void );\nvoid OnRxDone( uint8_t *payload, uint16_t size, int16_t rssi, int8_t snr );\n\ntypedef enum\n{\n  STATUS_LOWPOWER,\n  STATUS_RX,\n  STATUS_TX\n} States_t;\n\nint32_t iTimer;\nint iFreq;\nint16_t txNumber;\nStates_t state;\nbool sleepMode = false;\nint16_t Rssi, rxSize;\n\nuint32_t  license[4] = {a, b, c, d};\n\n// Add your initialization code here\nvoid setup()\n{\n  Serial.begin(115200);\n  while (!Serial);\n\n  vFrequency.push_back(868000000);\n  vFrequency.push_back(868100000);\n  vFrequency.push_back(868300000);\n  vFrequency.push_back(868500000);\n  vFrequency.push_back(867100000);\n  vFrequency.push_back(867300000);\n  
vFrequency.push_back(867500000);\n  vFrequency.push_back(867700000);\n  vFrequency.push_back(867900000);\n  vFrequency.push_back(868800000);\n  vFrequency.push_back(869525000);\n\n\n  SPI.begin(SCK, MISO, MOSI, SS);\n  Mcu.init(SS, RST_LoRa, DIO0, DIO1, license);\n\n  iFreq = 0;\n  txNumber = 0;\n  Rssi = 0;\n\n  RadioEvents.TxDone = OnTxDone;\n  RadioEvents.TxTimeout = OnTxTimeout;\n  RadioEvents.RxDone = OnRxDone;\n\n  Radio.Init( &RadioEvents );\n  Radio.SetChannel( RF_FREQUENCY );\n  // set radio parameter\n  Radio.SetTxConfig( MODEM_LORA, TX_OUTPUT_POWER, 0, LORA_BANDWIDTH,\n                     LORA_SPREADING_FACTOR, LORA_CODINGRATE,\n                     LORA_PREAMBLE_LENGTH, LORA_FIX_LENGTH_PAYLOAD_ON,\n                     LORA_CRC_ENABLED, LORA_FHSS_ENABLED, LORA_NB_SYMB_HOP,LORA_IQ_INVERSION_OFF, 2000000 );\n\n  Radio.SetRxConfig( MODEM_LORA, LORA_BANDWIDTH, LORA_SPREADING_FACTOR,\n                     LORA_CODINGRATE, 0, LORA_PREAMBLE_LENGTH,\n                     LORA_SYMBOL_TIMEOUT, LORA_FIX_LENGTH_PAYLOAD_ON, \n                     0, LORA_CRC_ENABLED, LORA_FHSS_ENABLED, LORA_NB_SYMB_HOP,LORA_IQ_INVERSION_ON, true );\n  state = STATUS_TX;\n}\n\n\nvoid loop()\n{\n  if (millis() - iTimer > 1000)\n  {\n\n    iFreq++;\n    if (iFreq > vFrequency.size() - 1)iFreq = 0;\n    Radio.SetChannel( vFrequency[iFreq] );\n    iTimer = millis();\n  }\n  switch (state)\n  {\n    case STATUS_TX:\n      delay(1000);\n      txNumber++;\n\n      Serial.printf("Resending packet , length %d, freq %li\\r\\n", rxSize, vFrequency[iFreq]);\n\n      Radio.Send( (uint8_t *)rxpacket, rxSize );\n      state = STATUS_LOWPOWER;\n      break;\n    case STATUS_RX:\n      Serial.println("into RX mode");\n      Radio.Rx( 0 );\n      state = STATUS_LOWPOWER;\n      break;\n    case STATUS_LOWPOWER:\n      LoRaWAN.sleep(CLASS_C, 0);\n      break;\n    default:\n      break;\n  }\n}\n\nvoid OnTxDone( void )\n{\n  Serial.print("TX done......");\n  state = STATUS_RX;\n}\n\nvoid OnTxTimeout( 
void )\n{\n  Radio.Sleep( );\n  Serial.print("TX Timeout......");\n  state = STATUS_TX;\n}\nvoid OnRxDone( uint8_t *payload, uint16_t size, int16_t rssi, int8_t snr )\n{\n  Rssi = rssi;\n  rxSize = size;\n  memcpy(rxpacket, payload, size );\n  //rxpacket[size]='\\0';\n  Radio.Sleep( );\n\n  Serial.printf("Packet Received: %i\\r\\n", rxSize);\n\n  for (int x = 0; x < rxSize; x++)\n  {\n    Serial.printf("%02X ", rxpacket[x]);\n  }\n  Serial.println("");\n  iTimer += 1000; // stay on this channel\n  state = STATUS_TX;\n}\n
\n", "Title": "Heltec WiFi Lora32 - Can i received LoraWAN packets?", "Tags": "|esp32|lora|lorawan|", "Answer": "

First of all, I'm pretty sure you are aware that LoRaWAN traffic is encrypted, so you wouldn't be able to actually read much of anything of the payloads.

\n

Depending on where you are and your setup (indoors/outdoors, antenna...) it could be quite easy to not see any LoRaWAN traffic at all. LoRaWAN can travel dozens of km with line of sight, but it can stop very suddenly indoors depending on the construction. And while some areas are quite busy with lots of devices, others are probably extremely quiet.

\n

Even if you are in range of some other devices, most are really quiet (that's the whole idea behind battery-powered LoRaWAN devices), and would send no more than one packet a day, possibly even less. Remember that contrary to WiFi or some BLE devices for instance, there are no constant broadcast/advertisements/beacons. Devices transmit only when they actually have something to transmit.

\n

You would also need to be listening with the right combination of frequency and data rate (SF+BW), which reduces your chances quite a bit.

\n

Your code only changes the frequency (and listens on frequencies which are not used for real channels, so you're losing some listening time), but does not change the data rate, so you're missing a good chunk of possible traffic.

\n

You would also have to check if the rest of the parameters (coding rate, IQ..) match those for LoRa.

\n

In addition, it seems you're switching frequency every second or so. At SF12, many packets will take longer than a second to transmit, so you're missing those as well.

\n
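The point about SF12 airtime can be checked with the standard SX127x time-on-air formula (a sketch; exact values depend on coding rate, header and CRC settings, which are defaulted here to common LoRaWAN choices):

```python
import math

def lora_airtime_ms(payload_len, sf=12, bw_hz=125000, cr=1, preamble=8,
                    explicit_header=True, crc=True, low_dr_opt=None):
    """Approximate LoRa time-on-air in ms (SX127x datasheet formula)."""
    if low_dr_opt is None:
        # Low data rate optimization is mandated for SF11/SF12 at 125 kHz.
        low_dr_opt = sf >= 11 and bw_hz == 125000
    t_sym = (2 ** sf) / bw_hz * 1000.0  # symbol duration in ms
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    num = 8 * payload_len - 4 * sf + 28 + (16 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

# Even a modest 20-byte payload at SF12/125 kHz is on air for over a second:
print(round(lora_airtime_ms(20)))  # → 1319
```

So hopping to a new frequency every second at SF12 will regularly cut packets in half.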

I'm not familiar with the Heltec libraries/APIs, but:

\n\n" }, { "Id": "6259", "CreationDate": "2022-04-26T15:34:11.447", "Body": "

I've managed to connect to my Mosquitto broker with the Paho MQTT JS client. However, the client disconnects immediately after completing the connection. It worked before; now when I test it, it doesn't work.

\n

This is my code for the main functions involved:\nconstructor() {

\n
    super();\n    this.onConnectionLost = this.onConnectionLost.bind(this)\n    this.onConnect = this.onConnect.bind(this)\n    this.onMessageArrived = this.onMessageArrived.bind(this)\n\n    const client = new Paho.MQTT.Client(process.REACT_APP_HOST, Number(process.REACT_APP_PORT), clientID);\n    client.onConnectionLost = this.onConnectionLost;\n    client.onConnect = this.onConnect;\n    client.onMessageArrived = this.onMessageArrived\n\nconstuctor(){\n    client.connect({\n\n      onSuccess: this.onConnect,\n      userName: process.REACT_APP_DB_NAME,\n      password: process.REACT_APP_PASSWORD,\n      onFailure: this.onConnectionLost,\n\n    });\n    this.state = {\n\n      client,\n      value: 0,\n    }\n\n  }\n\n  componentDidMount() {\n\n    this.onConnect = this.onConnect.bind(this);\n\n  }\n\n  onConnect = () => {\n    const { client } = this.state;\n    console.log("Connected!!!!");\n\n  }\n\n  onConnectionLost = (responseObject) => {\n    if (responseObject.errorCode !== 0) {\n      console.log("onConnectionLost : " + responseObject.errorMessage);\n    }\n  }\n\n  sendIntensity = () => {\n\n    const { client } = this.state;\n\n   client.publish(this.props.topic, this.state.value.toString(), 1)\n\n  }\n
\n

console log:

\n
Connected!!!!\nConnected!!!!\nonConnectionLost : AMQJS0007E Socket error:undefined.\nonConnectionLost : AMQJS0007E Socket error:undefined.\nConnected!!!!\n
\n

Error :

\n
\n

Error: AMQJS0011E Invalid state not connected. at\nnode_modules\\react-native\\Libraries\\LogBox....

\n
\n", "Title": "How to handle the connection in MQTT using Paho", "Tags": "|mqtt|aws-iot|paho|publish-subscriber|", "Answer": "
 const client = new Paho.MQTT.Client(process.REACT_APP_HOST, Number(process.REACT_APP_PORT), clientID);\n    client.onConnectionLost = this.onConnectionLost;\n    client.onMessageArrived = this.onMessageArrived\n\nconstuctor(){\n    client.connect({\n\n      onSuccess: this.onConnect,\n      userName: process.REACT_APP_DB_NAME,\n      password: process.REACT_APP_PASSWORD,\n      onFailure: this.onConnectionLost,\n\n    });\n
\n

You seem to define the connect handler twice (once via client.onConnect and again via the onSuccess option), which might account for connecting twice with the same client instance, hence the same client ID. Since the broker drops the older of two connections sharing a client ID, that would explain the repeated connect/disconnect in your log.

\n" }, { "Id": "6263", "CreationDate": "2022-04-28T02:10:17.753", "Body": "

I want to wire up a few DS18B20 sensors to measure various points across a span of about 10 meters. (I intend to hook this up to an ESP8266 and use Tasmota, but I think that's not really relevant to my question).

\n

I am uncertain whether I need one 4.7 kΩ pull-up resistor for the whole chain of devices, or a 4.7 kΩ pull-up resistor for each device (located close to the device).

\n

My google foo is weak, and I've seen both layouts. I could swear that when I worked with these sensors years ago I had 1 resistor per device, but the more I think about it, the less sense this seems to make to me.

\n", "Title": "How to wire multiple DS18B20 temperature sensors?", "Tags": "|sensors|", "Answer": "

You use one weak pull-up resistor (for instance, the data sheet shows a 4.7 kΩ resistor) for all the devices on one bus.

\n

The devices use Maxim's "1-Wire" protocol - all devices share a single wire for transmitting and receiving data. To make this work, each device drives the wire with an open-drain output (it only ever pulls the line low or lets it float), so multiple devices never try to power the wire at the same time. The single pull-up resistor is what returns the line to a readable high level for the CPU.

\n

You can learn more in the DS18B20 data sheet.

\n" }, { "Id": "6271", "CreationDate": "2022-05-02T16:17:02.660", "Body": "

I have two LoPy4 dev boards. I am using this tutorial to (successfully) communicate between the two boards. After receiving data over LoRa, I am also trying to publish this data to an MQTT broker. I have followed the related MQTT tutorial provided by Pycom, here (I also placed the mqtt.py file under the lib/ folder),\nbut every time I try to run the code I get OSError: [Errno 104] ECONNRESET.

\n

Has anyone faced any similar issues ?

\n

This is the main file that I am using on the LoPy:

\n
from mqtt import MQTTClient\n# from mqtt import MQTTClient_lib as MQTTClient\nfrom network import WLAN\nimport machine\nimport time\n\ndef sub_cb(topic, msg):\n   print(msg)\n\nwlan = WLAN(mode=WLAN.STA)\nwlan.connect("XXXXX", auth=(WLAN.WPA2, "XXXXXXXXXXXXXX"), timeout=5000)\n\nwhile not wlan.isconnected():  \n    machine.idle()\nprint("Connected to WiFi\\n")\n\nclient_id = "python-mqtt-1"\n\nclient = MQTTClient(client_id, '0.0.0.0', user="guest", password="guest", port=1883)\nclient.set_callback(sub_cb)\nclient.connect()\nclient.subscribe(topic="python.mqtt")\n\nwhile True:\n    x = 100\n    while x > 0:\n        client.publish(topic="python.mqtt", msg=str(x))\n        client.check_msg()\n        x -= 1\n        time.sleep(2)\n\n    time.sleep(100)\n
\n

This is the Docker Compose file for RabbitMQ:

\n
services:\n  rabbitmq:\n    image: rabbitmq:3.9-management\n    container_name: rabbitmq\n    ports:\n      - 1883:1883\n      - 5672:5672\n      - 15672:15672\n    volumes:\n      - ./conf/myrabbit.conf:/etc/rabbitmq/myrabbit.conf\n    command: '/bin/bash -c "rabbitmq-plugins enable rabbitmq_mqtt; rabbitmq-server"'\n
\n

I also created a Python script that publishes MQTT messages to the same broker:

\n
import random\nimport time\n\nfrom paho.mqtt import client as mqtt_client\n\n\nbroker = '0.0.0.0'\nport = 1883\ntopic = "python.mqtt"\n# generate client ID with pub prefix randomly\nclient_id = f'python-mqtt-{random.randint(0, 1000)}'\n\ndef connect_mqtt():\n    def on_connect(client, userdata, flags, rc):\n        if rc == 0:\n            print("Connected to MQTT Broker!")\n        else:\n            print("Failed to connect, return code %d\\n", rc)\n\n    client = mqtt_client.Client(client_id)\n    client.username_pw_set('guest', 'guest')\n    client.on_connect = on_connect\n    client.connect(broker, port)\n    return client\n\n\ndef publish(client):\n    msg_count = 0\n    while True:\n        if msg_count == 50 :\n            break\n        time.sleep(0.1)\n        msg = f"messages: {msg_count}"\n        result = client.publish(topic, msg)\n        # result: [0, 1]\n        status = result[0]\n        if status == 0:\n            print(f"Send `{msg}` to topic `{topic}`")\n        else:\n            print(f"Failed to send message to topic {topic}")\n        msg_count += 1\n\n\ndef run():\n    client = connect_mqtt()\n    client.loop_start()\n    publish(client)\n\n\nif __name__ == '__main__':\n    run()\n
\n

When uploading the code to the LoPy I keep getting ECONNRESET; however, the Python script publishes to the broker successfully.

\n", "Title": "LoPy4 MQTT Example does not Work", "Tags": "|mqtt|lora|", "Answer": "

I was quite naive. I was using localhost on the LoPy instead of the broker's IP address. Replacing that fixed the issue. Thanks for all the answers!

\n" }, { "Id": "6288", "CreationDate": "2022-05-20T23:04:41.747", "Body": "

I have lost the remote for a Google TV device and after a factory reset I can't connect to it with the Google Home app on my phone. How do I make it work?

\n", "Title": "How to set up Google TV without remote", "Tags": "|chromecast|", "Answer": "

Got it to work entirely with the Google Home and Google TV apps, after doing the same thing: factory reset, no remote, stuck at the remote-pairing screen. If you press the button on the back of the Chromecast just once (don't hold it down), it goes into pairing mode. I then opened the Google TV app and selected the small TV icon in the lower right-hand corner. It showed the Chromecast as an available device; I selected it and it connected. Then I just mashed the center button to get past the pairing screen and walked through the rest of the setup procedure. It asked me to re-pair somewhere in the middle, while downloading apps. Worked!

\n" }, { "Id": "6303", "CreationDate": "2022-05-31T04:37:06.680", "Body": "

Supposing I want the application firmware in my LoRaWAN end device to behave differently when it's in a given geolocation and that geolocation can be defined by the local Helium base station or cell ("gateway" in the LoRaWAN parlance?) that the end device is sitting in.

\n

Suppose also that I'm happy to send one uplink and receive one downlink message in order to determine the gateway ID.

\n

Section 3.2 of the LoRaWAN spec has only this to say about the downlink PHY frame:

\n
\n

Downlink PHY: Preamble PHDR PHDR_CRC PHYPayload

\n
\n

Is there also a gateway ID in there somewhere? Or is there one in one of the other layers?

\n

If not, could the application server know this from the uplink message and send it back as part of the downlink message payload?

\n", "Title": "Can a LoRaWAN end device know an ID of the gateway to which it's connected?", "Tags": "|lorawan|", "Answer": "

Yes, there is a gateway ID which you receive with every uplink message, but I see two issues:

\n
1. Your node might be in range of more than one gateway, so you will receive an array of gateway objects.
2. Doing a downlink after every uplink is unlikely to be feasible.
\n
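To illustrate the first issue: on the server side the gateway identifiers arrive as an array in the uplink metadata, and picking one is up to your application. A sketch in Python, where the field names (`gateway_id`, `rssi`) are assumptions modelled loosely on typical network-server uplink JSON, not a fixed LoRaWAN format:

```python
# Hedged sketch: choose the "nearest" gateway as the one with the
# strongest RSSI out of the array of gateway objects attached to one
# uplink. Field names here are assumptions, not part of the LoRaWAN spec.
def strongest_gateway(rx_metadata: list) -> str:
    best = max(rx_metadata, key=lambda gw: gw["rssi"])
    return best["gateway_id"]
```

The application server could then send the chosen gateway ID back to the device in a downlink payload, but only as often as the duty-cycle and downlink budget allow.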

BR Cademis

\n" }, { "Id": "6318", "CreationDate": "2022-06-12T15:46:11.570", "Body": "

Roomba can use IFTTT to run the vacuum when a user's phone leaves a geofenced area. And you used to be able to use Life360 + IFTTT to trigger your smart vacuum to run when everyone has left the house (not just one person). However, Life360 ended their partnership with IFTTT, and since then there has been no obvious way to do this.

\n

How can I trigger my Roomba to vacuum when everyone leaves the house?

\n

Some things I've tried:

\n\n", "Title": "How to start the Roomba when everyone leaves?", "Tags": "|smart-home|ifttt|", "Answer": "

This can be done quite easily using Home Assistant. If you haven't looked into it yet, you may want to start here.

\n

I find that geofencing in Home Assistant is a bit spotty, but it does seem to mostly work. I think a better way of doing this is to install the Home Assistant app on each resident's phone and have an automation trigger when each phone's SSID changes to something other than your home Wi-Fi. I use this to make sure my lights are off when I leave home.
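For completeness, the zone-based variant can be sketched as a Home Assistant automation. The entity ids below (`zone.home`, `vacuum.roomba`) are assumptions; yours will differ:

```yaml
# Hypothetical sketch: start the vacuum when the number of people
# tracked in the Home zone drops to zero.
automation:
  - alias: "Vacuum when everyone leaves"
    trigger:
      - platform: state
        entity_id: zone.home   # zone entities report how many people are inside
        to: "0"
    action:
      - service: vacuum.start
        target:
          entity_id: vacuum.roomba
```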

\n" }, { "Id": "6333", "CreationDate": "2022-06-23T05:41:33.623", "Body": "

I am trying to connect an ESP8266 running Tasmota to a cloud MQTT broker, and I get this message:

\n
 MQT: Connect failed to broker.hivemq.com:1883, rc -2. Retry in 10 sec\n
\n

Any ideas?

\n", "Title": "tasmota on ESP8266 is not connecting to cloud mqtt server", "Tags": "|mqtt|tasmota|", "Answer": "

It turns out that in versions after 9.1, Tasmota changed the default value of MqttWifiTimeout from 2000 ms to 200 ms, and because the ESP8266 is not a very fast MCU this timeout is too short. So setting:

\n
MqttWifiTimeout 1000\n
\n

from the console does the trick.

\n" }, { "Id": "6339", "CreationDate": "2022-07-01T00:24:42.630", "Body": "

I'm trying to get telemetry from my Solax hybrid inverter, which has a "Pocket WiFi" device installed, without the information going across the internet (it's my data, it's sent unencrypted and stored in a country whose values I do not share, and the portal usage terms also give them the right to meddle with my device at my risk).

\n

I have been able to connect to the Solax AP and, by pretending to be the Solax app connected to "local", I can get a dump of raw data as a CSV which I think I will be able to decode. That said, I really don't want to have to set up a dedicated AP client just to get this information.

\n

I see that the device is also connected to my LAN, and port 80 is open, but it does not behave like a web server, and replaying the request I made to 5.8.8.8 (i.e. when connected to the Pocket WiFi AP) did not work.

\n

It also looks like it is sending telemetry, unencrypted but not in an immediately recognisable format, to a specific IP address (47.254.152.103, port 2901) which it looks like I can change in the configuration, but I can't find details of the protocol.

\n

Does anyone know how I can access this data without connecting specifically to the Pocket WIFI AP, and without the data going to the cloud?

\n", "Title": "Getting telemetry from Solax Inverter Pocket WIFI", "Tags": "|protocols|", "Answer": "

This worked for me: local information every 5 seconds from an X1 Hybrid G4 with Pocket WiFi into Home Assistant (if you use it). I'm not sure whether it uses the API to get the info, but it's worth checking out:

\n

https://community.home-assistant.io/t/solax-x1-hybrid-g4-local-cloud-api/506172

\n" }, { "Id": "6347", "CreationDate": "2022-07-08T04:15:36.153", "Body": "

I am having trouble connecting a SIM7070G (SIM7000 family) to AWS over its built-in MQTT using AWS certificates. I succeeded previously using the module only as a cellular gateway, running FreeRTOS+mbedTLS on the Windows simulator; however, now that I'm trying to offload SSL/TLS to the cellular module, I seem to be hitting a wall. I've followed the example flow log from here, with no success.

\n

So my flow is:

\n\n

To see whether the certificates work, I used mosquitto_sub and sent a test message from the AWS MQTT test client:

\n
mosquitto_sub.exe --cert certificate.crt --key private.key --cafile LegacyRoot.pem -h aaaaxi07e85ykv.iot.us-west-2.amazonaws.com -p 8883 -t "test"\n{\n  "message": "Hello from AWS IoT console"\n}\n
\n

Seems like all goes well.

\n

I've uploaded the certificates to the "customer" directory on the SIM7070G module using the QPST EFS Explorer utility.

\n

Then verified that the module can find the files:

\n
7/7/2022 23:19:12.106 [TX] - AT+CFSINIT<CR><LF>\n\n7/7/2022 23:19:12.118 [RX] - AT+CFSINIT<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:19:14.404 [TX] - AT+CFSGFIS=3,"LegacyRoot.pem"<CR><LF>\n\n7/7/2022 23:19:14.415 [RX] - AT+CFSGFIS=3,"LegacyRoot.pem"<CR>\n<CR><LF>\n+CFSGFIS: 1758<CR><LF>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:20:00.773 [TX] - AT+CFSGFIS=3,"certificate.crt"<CR><LF>\n\n7/7/2022 23:20:00.778 [RX] - AT+CFSGFIS=3,"certificate.crt"<CR>\n<CR><LF>\n+CFSGFIS: 1224<CR><LF>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:20:03.276 [TX] - AT+CFSGFIS=3,"private.key"<CR><LF>\n\n7/7/2022 23:20:03.288 [RX] - AT+CFSGFIS=3,"private.key"<CR>\n<CR><LF>\n+CFSGFIS: 1679<CR><LF>\n<CR><LF>\nOK<CR><LF>\n
\n

So the certificates are uploaded; let's now connect:

\n
7/7/2022 22:55:19.304 [TX] - AT+CNACT=0,1<CR><LF>\n\n7/7/2022 22:55:19.317 [RX] - AT+CNACT=0,1<CR>\n<CR><LF>\nOK<CR><LF>\n<CR><LF>\n+APP PDP: 0,ACTIVE<CR><LF>\n\n7/7/2022 22:55:21.559 [TX] - AT+CNACT?<CR><LF>\n\n7/7/2022 22:55:21.571 [RX] - AT+CNACT?<CR>\n<CR><LF>\n+CNACT: 0,1,"10.155.172.130"<CR><LF>\n+CNACT: 1,0,"0.0.0.0"<CR><LF>\n+CNACT: 2,0,"0.0.0.0"<CR><LF>\n+CNACT: 3,0,"0.0.0.0"<CR><LF>\n<CR><LF>\nOK<CR><LF>\n
\n

Let's configure the certificates and connect to AWS:

\n
7/7/2022 23:50:39.604 [TX] - AT+CSSLCFG="convert",2,"LegacyRoot.pem"<CR><LF>\n\n7/7/2022 23:50:39.614 [RX] - AT+CSSLCFG="convert",2,"LegacyRoot.pem"<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:42.084 [TX] - AT+CSSLCFG="convert",1,"certificate.crt","private.key"<CR><LF>\n\n7/7/2022 23:50:42.097 [RX] - AT+CSSLCFG="convert",1,"certificate.crt","private.key"<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:44.590 [TX] - AT+CSSLCFG="sslversion",0,3<CR><LF>\n\n7/7/2022 23:50:44.603 [RX] - AT+CSSLCFG="sslversion",0,3<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:46.699 [TX] - AT+SMSSL=1,"LegacyRoot.pem","certificate.crt"<CR><LF>\n\n7/7/2022 23:50:46.712 [RX] - AT+SMSSL=1,"LegacyRoot.pem","certificate.crt"<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:48.199 [TX] - AT+SMCONF=url,"aaaaxi07e85ykv.iot.us-west-2.amazonaws.com","8883"<CR><LF>\n\n7/7/2022 23:50:48.211 [RX] - AT+SMCONF=url,"aaaaxi07e85ykv.iot.us-west-2.amazonaws.com","8883"<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:49.501 [TX] - AT+SMCONF="clientid","basicPubSub"<CR><LF>\n\n7/7/2022 23:50:49.514 [RX] - AT+SMCONF="clientid","basicPubSub"<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:50.781 [TX] - AT+SMCONF="KEEPTIME",60<CR><LF>\n\n7/7/2022 23:50:50.794 [RX] - AT+SMCONF="KEEPTIME",60<CR>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:50:53.411 [TX] - AT+SMCONN<CR><LF>\n\n7/7/2022 23:50:53.423 [RX] - AT+SMCONN<CR>\n<CR><LF>\nERROR<CR><LF>\n
\n

And it's an error... I've tried a different certificate, but it did not work either.

\n

Some debug commands:

\n
7/7/2022 23:52:34.933 [TX] - AT+SMCONF?<CR><LF>\n\n7/7/2022 23:52:34.945 [RX] - AT+SMCONF?<CR>\n<CR><LF>\n+SMCONF: <CR>\n<CR><LF>\nCLIENTID: "basicPubSub"<CR>\n<CR><LF>\nURL: "aaaaxi07e85ykv.iot.us-west-2.amazonaws.com",8883<CR>\n<CR><LF>\nKEEPTIME: 60<CR>\n<CR><LF>\nUSERNAME: ""<CR>\n<CR><LF>\nPASSWORD: ""<CR>\n<CR><LF>\nCLEANSS: 0<CR>\n<CR><LF>\nQOS: 0<CR>\n<CR><LF>\nTOPIC: ""<CR>\n<CR><LF>\nMESSAGE: ""<CR>\n<CR><LF>\nRETAIN: 0<CR>\n<CR><LF>\nSUBHEX: 0<CR>\n<CR><LF>\nASYNCMODE: 0<CR><LF>\n<CR><LF>\nOK<CR><LF>\n\n7/7/2022 23:52:41.506 [TX] - AT+SMSSL?<CR><LF>\n\n7/7/2022 23:52:41.518 [RX] - AT+SMSSL?<CR>\n<CR><LF>\n+SMSSL: 1,"LegacyRoot.pem","certificate.crt"<CR><LF>\n<CR><LF>\nOK<CR><LF>\n
\n

I've also tried passing the AT+CSSLCFG parameters without quotes, and that did not help. I've tried to follow the thread here, but it seems like I'm doing everything right.\nNote: the AWS endpoint is on the AWS West server and the certificate region is supported.

\n

Any hints, please?

\n

Thanks!

\n", "Title": "Connecting cellular module SIM7070G to AWS MQTT", "Tags": "|mqtt|aws|", "Answer": "

I've figured it out. The module starts with a default time of 1980-Jan-01, and AWS rejects the connection during authentication because of the time mismatch. After setting the correct time with AT+CCLK, MQTT connected successfully. Alternatively, configure the module to pull the time over NTP using AT+CNTP="pool.ntp.org",-16,0,0 (the -16 offset corresponds to the EST time zone).

\n" }, { "Id": "6369", "CreationDate": "2022-07-26T05:29:33.490", "Body": "

I have a LG WebOS TV (2021) which has no connection to the internet. Every few months I activate the internet connection in the firewall (OpenWrt) and check if updates are available. A somewhat manual and unfortunate solution.

\n

Is there a mailing list or any other kind of notifications for WebOS release updates and security fixes?

\n", "Title": "Notifications for LG WebOS updates and security fixes (e.g. a mailing list)", "Tags": "|security|smart-tv|", "Answer": "

Have you registered your product? Product Registration
\nTechnically, if you have registered it, they should send you an email when there are updates (improvements) to that product, not just recalls.
\nNow, if you're registered and they are still not sending you updates, you should send them feedback via the Feedback link or email their support and ask for this feature.
\nAre you worried about leaving the connection open all the time? That's understandable, I guess...

\n" }, { "Id": "6373", "CreationDate": "2022-07-30T07:02:02.310", "Body": "

I am trying to work with a few GPS-enabled tracking devices, one of them being a TK-303 GPS tracker that uses SIM-powered GPRS to access the internet. Currently, I can receive messages on my Python TCP socket server, but when I tried initiating a connection to the device using the public address that is broadcast when the device sends a message, the connection wouldn't succeed. My question is: how is this done? I would like insight on how to go about doing this. Thank you.

\n", "Title": "How to initiate connection to a GPRS enabled tracking device", "Tags": "|tracking-devices|python|", "Answer": "

Data transfers have to be initiated by the GPRS module; your cellular provider will have a firewall that blocks all connection requests that originate from the internet.

\n

The usual way round this is to send an SMS to the tracker, which can either return its position in an SMS, or be triggered to initiate contact with a server.

\n" }, { "Id": "6388", "CreationDate": "2022-08-12T17:33:58.720", "Body": "

My Ember mug has a firmware update available, but when I try to update it the process fails at 50% during a reconnection step. How do I make sure the update is able to finish?

\n", "Title": "Why can't I update my Ember mug's firmware?", "Tags": "|smart-home|hardware|kitchen-appliances|", "Answer": "

After the recommended reset, I tried the update again with the mug off the charger. That worked for me.

\n" }, { "Id": "6391", "CreationDate": "2022-08-18T05:09:46.187", "Body": "

I want to use LoRa RA-02 based on SX1278 with 433MHz.

\n

In some countries I can transmit at 10 dBm max, and I will connect a quarter-wavelength (17 cm) antenna.

\n

The RA-02 datasheet quotes a high sensitivity of down to -148 dBm.

\n

How can I calculate the maximum distance between transmitter and receiver?

\n", "Title": "Calculation LoRa range with RA02 chip", "Tags": "|lora|", "Answer": "

First and foremost, any theoretical calculations are only valid if you have line-of-sight (LoS) between transmitter and receiver. As soon as you have an obstacle, it is nearly impossible to compute it, you'll only be able to measure it to take into account the related losses.

\n

Second, you need not only direct line of sight to be free of obstacles, but you need the Fresnel zone to be free as well. At 433\u00a0MHz, the radius of the Fresnel zone at 1\u00a0km is 13\u00a0meters. At 100\u00a0km it is over 130\u00a0meters.

\n

Taking into account this and the curvature of the Earth, that means that for any significant distance, even on flat terrain without any obstacles, you need one or both of the devices to be quite high above ground. All LoRa distance records have involved either balloons or devices very high above ground (on top of mountains or towers).

\n

Now, for the calculation...

\n

What you need is a link budget calculator, which uses the link budget equation to compute the result. There are plenty on the web, though they may not work in the direction you need (they are often distance -> budget rather than budget -> distance), but you can always do a binary search.

\n

The inputs you will need are:

\n\n

Counting 0 dB for the losses in cabling etc., the link budget without free space path loss is 10 + 0 + 2.1 + 2.1 + 0 - (-148) = 162.2 dB.

\n

So your max distance is where the free space path loss (FSPL) is 162.2 dB. FSPL in dB is 20log10(d) + 20log10(f) - 147.55 (with d in meters and f in Hz)

\n

With f = 433 MHz, FSPL = 20log10(d) + 25.18, which means d = 10^((FSPL - 25.18)/20) or a bit over 7000 km.

\n

Note that there are quite a few assumptions here:

\n\n

In practice, what will determine the max distance is most likely to be the terrain. There are tools which allow you to enter points on a map and see if a given link is feasible. Other tools allow you to see the theoretical coverage from a given point.

\n

Note, again, that all this is only valid if both devices are outdoors with strictly no obstacles between them. Anything where one or both devices are indoors, or there are any obstacles (terrain, trees, buildings) will very, very, very significantly reduce the max distance (it can go down to a few dozen meters!).

\n" }, { "Id": "6423", "CreationDate": "2022-09-29T11:01:44.730", "Body": "

As I understand it, MQTT QoS levels apply to the connection between Publisher and Broker, or between Broker and Subscriber; QoS levels have nothing to do with a direct connection between Publisher and Subscriber. That means a Publisher can't be sure whether a specific Subscriber received the message. The only thing the Publisher knows is that the Broker has received the message, if an appropriate QoS level is used. Is that true?

\n", "Title": "Is there a way to be sure that the message is received by the subscriber in MQTT", "Tags": "|mqtt|", "Answer": "

There is nothing in the MQTT protocol for end to end delivery notification.

\n

This is because when a message is published there may be

\n\n

This means that end to end notification would have to handle all 3 of the use cases, where a notification could:

\n\n

If you need end to end then you need to publish a second acknowledgement message from the receiving client.
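A minimal sketch of such an application-level acknowledgement, in Python. The envelope format and the idea of a companion ack topic are my own conventions here, not anything defined by MQTT:

```python
# Hedged sketch of end-to-end acknowledgement on top of MQTT: the
# publisher tags each message with a unique id, and the receiving
# client publishes a matching ack (e.g. on "<topic>/ack"). Neither the
# envelope format nor the ack-topic convention is part of MQTT itself.
import json
import uuid

def make_envelope(payload: str) -> tuple:
    """Publisher side: wrap a payload; returns (msg_id, json_envelope)."""
    msg_id = str(uuid.uuid4())
    return msg_id, json.dumps({"id": msg_id, "payload": payload})

def make_ack(envelope: str) -> str:
    """Subscriber side: build the ack the receiving client publishes back."""
    msg_id = json.loads(envelope)["id"]
    return json.dumps({"ack": msg_id})

def is_acked(msg_id: str, ack: str) -> bool:
    """Publisher side: match a received ack against the pending message id."""
    return json.loads(ack).get("ack") == msg_id
```

The publisher would keep unacked ids in a pending set and retry or alert after a timeout; that retry policy is entirely application-defined.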

\n" }, { "Id": "6434", "CreationDate": "2022-10-11T14:53:36.747", "Body": "

I am currently working on a project that detects specific objects in a video stream. If a particular object is detected, an alarm of some sort will be rung. An HTML page displays the object-detection output and the name of the object detected; I have used Flask with Python for the HTML page. The video from the ESP32 camera is fed to a Python script through a URL, and the output is then displayed on the HTML page. I am not using the camera's web server, but rather a Flask web app.\nI have completed the object-detection part and it outputs successfully, but I can't figure out a way to control the alarm system attached to the ESP32-CAM from the Python script.\nThis is my Python code:

\n
from pickletools import read_uint1\nfrom flask import Flask, render_template, Response\nimport cv2\nimport cvzone\n\n\n#url = 'C:/Users/Mukesh/Downloads/videoplayback.mp4'\nurl = '192.168.35.7:81/stream'\n\nclassNames = []\nclassFile = 'C:/Users/Mukesh/Desktop/Mini Project/Code/AIM/coco.names'\nwith open(classFile, 'rt') as f:\n    classNames = f.read().split('\\n')\nconfigPath = 'C:/Users/Mukesh/Desktop/Mini Project/Code/AIM/ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'\nweightsPath = "C:/Users/Mukesh/Desktop/Mini Project/Code/AIM/frozen_inference_graph.pb"\nnet = cv2.dnn_DetectionModel(weightsPath, configPath)\nnet.setInputSize(320, 320)\nnet.setInputScale(1.0 / 127.5)\nnet.setInputMean((127.5, 127.5, 127.5))\nnet.setInputSwapRB(True)\napp = Flask(__name__)\nthres = 0.55\nnmsThres = 0.2\ncap = cv2.VideoCapture(url)\ncap.set(3, 640)\ncap.set(4, 480)\n\ndef gen_frames():\n    while 1:\n        isTrue, img = cap.read()\n        img = cv2.flip(img, 1)\n        if img is not None:\n            classIds, confs, bbox = net.detect(\n                img, confThreshold=thres, nmsThreshold=nmsThres)\n            try:\n                for classId, conf, box in zip(classIds.flatten(), confs.flatten(), bbox):\n                    id = classNames[classId - 1]\n                    #Send ID to HTML page\n                    print(id)\n                    if id=='drone':\n                        cvzone.cornerRect(img, box)\n                        cv2.putText(img, f'{classNames[classId - 1].upper()} {round(conf * 100, 2)}',\n                                (box[0] + 10, box[1] +\n                                 30), cv2.FONT_HERSHEY_COMPLEX_SMALL,\n                                1, (0, 255, 0), 2)\n                frame = img\n                ret, buffer = cv2.imencode('.jpg', frame)\n                frame = buffer.tobytes()\n                cv2.imshow('',img)\n                if cv2.waitKey(20)&0xff==ord(' '):\n                    break\n                yield 
(b'--frame\\r\\n'\n                       b'Content-Type: image/jpeg\\r\\n\\r\\n' + frame + b'\\r\\n')\n            except:\n                pass\n            cv2.waitKey(1)\n        else:\n            break\n\n\n@app.route('/video_feed')\ndef video_feed():\n    return Response(gen_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')\n\n\n@app.route('/')\ndef home():\n    """Video streaming home page."""\n    return render_template('Home.html')\n\n\nif __name__ == '__main__':\n    app.run(debug=True)\n\n
\n

This is the webpage

\n
<!DOCTYPE html>\n<html lang="en">\n\n    <head>\n        <title>Home</title>\n        <meta charset="UTF-8">\n        <meta http-equiv="X-UA-Compatible" content="IE=edge">\n        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">\n        <link href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet">\n        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css"\n        integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">\n        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js" type="text/javascript"></script>\n        <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.bundle.min.js"></script>\n        <style>\n            body {\n                font-family: 'Gill Sans', 'Gill Sans MT', Calibri, 'Trebuchet MS', sans-serif;\n                background-image: linear-gradient(to right, #2E3192, #1BFFFF);\n                background-position: center;\n                background-repeat: no-repeat;\n                background-size: cover;\n                position: relative;\n                min-height: 100vh;\n                display: flex;\n                flex-flow: row;\n            }\n\n            .booth {\n                flex: 1 1 auto;\n            }\n\n            .f2 {\n                flex: 1 1 auto;\n                display: flex;\n                flex-flow: column-reverse;\n            }\n\n            .f2 .text {\n                flex: 0 1 auto;\n            }\n\n            .f2 .frame {\n                flex: 1 1 auto;\n                position: relative;\n                display: flex;\n                flex-flow: column;\n            }\n\n            .f2 .frame #click-photo {\n                flex: 0 1 auto;\n            }\n\n            .f2 .frame #canvas {\n                flex: 1 1 auto;\n            }\n\n         
   #canvas {\n                background-color: red;\n            }\n\n        </style>\n\n    </head>\n\n    <body>\n        <div class="booth">\n            <!--video id="video" width="100%" height="100%" autoplay></video-->\n            <img id="video" src="{{ url_for('video_feed') }}" width="100%">\n        </div>\n        <div class="f2">\n            <div class="frame">\n                <canvas id="canvas"></canvas>\n                <button id="click-photo">Click Photo</button>\n            </div>\n            <div class="text">\n                <h2 id="status" src="{{ url_for('obj')}}"></h2>\n                <!---Drone status--->\n                <h2 id="cam"></h2>\n                <!--Camera location-->\n            </div>\n        </div>\n        <script>\n            let video = document.querySelector("#video");\n            let click_button = document.querySelector("#click-photo");\n            let canvas = document.querySelector("#canvas");\n\n            click_button.addEventListener('click', function () {\n                canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);\n                let image_data_url = canvas.toDataURL('image/jpeg');\n                console.log(image_data_url);\n                //document.getElementById("status").innerHTML = " Drone: detected";\n                document.getElementById("cam").innerHTML = "Camera: 1";\n            });\n            var stop = function () {\n                var stream = video.srcObject;\n                var tracks = stream.getTracks();\n                for (var i = 0; i < tracks.length; i++) {\n                    var track = tracks[i];\n                    track.stop();\n                }\n                video.srcObject = null;\n            }\n            var start = function () {\n                var video = document.getElementById('video'),\n                    vendorUrl = window.URL || window.webkitURL;\n                if (navigator.mediaDevices.getUserMedia) {\n    
                navigator.mediaDevices.getUserMedia({ video: true })\n                        .then(function (stream) {\n                            video.srcObject = stream;\n                        }).catch(function (error) {\n                            console.log("Something went wrong!");\n                        });\n                }\n            }\n            $(function () {\n                start();\n            });\n        </script>\n    </body>\n\n</html>\n\n
\n

This is the ESP32 Camera code

\n
#include "esp_camera.h"\n#include <WiFi.h>\n#include "esp_timer.h"\n#include "img_converters.h"\n#include "Arduino.h"\n#include "fb_gfx.h"\n#include "soc/soc.h" //disable brownout problems\n#include "soc/rtc_cntl_reg.h"  //disable brownout problems\n#include "esp_http_server.h"\n#include <ESP32Servo.h>\n//Replace with your network credentials\nconst char* ssid = "Mukesh";\nconst char* password = "qwertyuiop";\n\n#define PART_BOUNDARY "123456789000000000000987654321"\n\n// This project was tested with the AI Thinker Model, M5STACK PSRAM Model and M5STACK WITHOUT PSRAM\n#define CAMERA_MODEL_AI_THINKER\n//#define CAMERA_MODEL_M5STACK_PSRAM\n//#define CAMERA_MODEL_M5STACK_WITHOUT_PSRAM\n\n// Not tested with this model\n//#define CAMERA_MODEL_WROVER_KIT\n\n#if defined(CAMERA_MODEL_WROVER_KIT)\n  #define PWDN_GPIO_NUM    -1\n  #define RESET_GPIO_NUM   -1\n  #define XCLK_GPIO_NUM    21\n  #define SIOD_GPIO_NUM    26\n  #define SIOC_GPIO_NUM    27\n\n  #define Y9_GPIO_NUM      35\n  #define Y8_GPIO_NUM      34\n  #define Y7_GPIO_NUM      39\n  #define Y6_GPIO_NUM      36\n  #define Y5_GPIO_NUM      19\n  #define Y4_GPIO_NUM      18\n  #define Y3_GPIO_NUM       5\n  #define Y2_GPIO_NUM       4\n  #define VSYNC_GPIO_NUM   25\n  #define HREF_GPIO_NUM    23\n  #define PCLK_GPIO_NUM    22\n\n#elif defined(CAMERA_MODEL_M5STACK_PSRAM)\n  #define PWDN_GPIO_NUM     -1\n  #define RESET_GPIO_NUM    15\n  #define XCLK_GPIO_NUM     27\n  #define SIOD_GPIO_NUM     25\n  #define SIOC_GPIO_NUM     23\n\n  #define Y9_GPIO_NUM       19\n  #define Y8_GPIO_NUM       36\n  #define Y7_GPIO_NUM       18\n  #define Y6_GPIO_NUM       39\n  #define Y5_GPIO_NUM        5\n  #define Y4_GPIO_NUM       34\n  #define Y3_GPIO_NUM       35\n  #define Y2_GPIO_NUM       32\n  #define VSYNC_GPIO_NUM    22\n  #define HREF_GPIO_NUM     26\n  #define PCLK_GPIO_NUM     21\n\n#elif defined(CAMERA_MODEL_M5STACK_WITHOUT_PSRAM)\n  #define PWDN_GPIO_NUM     -1\n  #define RESET_GPIO_NUM    15\n  #define XCLK_GPIO_NUM   
  27\n  #define SIOD_GPIO_NUM     25\n  #define SIOC_GPIO_NUM     23\n\n  #define Y9_GPIO_NUM       19\n  #define Y8_GPIO_NUM       36\n  #define Y7_GPIO_NUM       18\n  #define Y6_GPIO_NUM       39\n  #define Y5_GPIO_NUM        5\n  #define Y4_GPIO_NUM       34\n  #define Y3_GPIO_NUM       35\n  #define Y2_GPIO_NUM       17\n  #define VSYNC_GPIO_NUM    22\n  #define HREF_GPIO_NUM     26\n  #define PCLK_GPIO_NUM     21\n\n#elif defined(CAMERA_MODEL_AI_THINKER)\n  #define PWDN_GPIO_NUM     32\n  #define RESET_GPIO_NUM    -1\n  #define XCLK_GPIO_NUM      0\n  #define SIOD_GPIO_NUM     26\n  #define SIOC_GPIO_NUM     27\n\n  #define Y9_GPIO_NUM       35\n  #define Y8_GPIO_NUM       34\n  #define Y7_GPIO_NUM       39\n  #define Y6_GPIO_NUM       36\n  #define Y5_GPIO_NUM       21\n  #define Y4_GPIO_NUM       19\n  #define Y3_GPIO_NUM       18\n  #define Y2_GPIO_NUM        5\n  #define VSYNC_GPIO_NUM    25\n  #define HREF_GPIO_NUM     23\n  #define PCLK_GPIO_NUM     22\n#else\n  #error "Camera model not selected"\n#endif\n// #define Ser_1 14\n// #define Ser_2 15\n// #define ser_step 5\n// Servo sn1;\n// Servo sn2;\n// Servo s1;\n// Servo s2;\n// int pos1=0;\n// int pos2=0;\n\nstatic const char* _STREAM_CONTENT_TYPE = "multipart/x-mixed-replace;boundary=" PART_BOUNDARY;\nstatic const char* _STREAM_BOUNDARY = "\\r\\n--" PART_BOUNDARY "\\r\\n";\nstatic const char* _STREAM_PART = "Content-Type: image/jpeg\\r\\nContent-Length: %u\\r\\n\\r\\n";\n\nhttpd_handle_t stream_httpd = NULL;\n\nstatic esp_err_t stream_handler(httpd_req_t *req){\n  camera_fb_t * fb = NULL;\n  esp_err_t res = ESP_OK;\n  size_t _jpg_buf_len = 0;\n  uint8_t * _jpg_buf = NULL;\n  char * part_buf[64];\n\n  res = httpd_resp_set_type(req, _STREAM_CONTENT_TYPE);\n  if(res != ESP_OK){\n    return res;\n  }\n\n  while(true){\n    fb = esp_camera_fb_get();\n    if (!fb) {\n      Serial.println("Camera capture failed");\n      res = ESP_FAIL;\n    } else {\n      if(fb->width > 400){\n        if(fb->format != 
PIXFORMAT_JPEG){\n          bool jpeg_converted = frame2jpg(fb, 80, &_jpg_buf, &_jpg_buf_len);\n          esp_camera_fb_return(fb);\n          fb = NULL;\n          if(!jpeg_converted){\n            Serial.println("JPEG compression failed");\n            res = ESP_FAIL;\n          }\n        } else {\n          _jpg_buf_len = fb->len;\n          _jpg_buf = fb->buf;\n        }\n      }\n    }\n    if(res == ESP_OK){\n      size_t hlen = snprintf((char *)part_buf, 64, _STREAM_PART, _jpg_buf_len);\n      res = httpd_resp_send_chunk(req, (const char *)part_buf, hlen);\n    }\n    if(res == ESP_OK){\n      res = httpd_resp_send_chunk(req, (const char *)_jpg_buf, _jpg_buf_len);\n    }\n    if(res == ESP_OK){\n      res = httpd_resp_send_chunk(req, _STREAM_BOUNDARY, strlen(_STREAM_BOUNDARY));\n    }\n    if(fb){\n      esp_camera_fb_return(fb);\n      fb = NULL;\n      _jpg_buf = NULL;\n    } else if(_jpg_buf){\n      free(_jpg_buf);\n      _jpg_buf = NULL;\n    }\n    if(res != ESP_OK){\n      break;\n    }\n    //Serial.printf("MJPG: %uB\\n",(uint32_t)(_jpg_buf_len));\n  }\n  return res;\n}\n\nvoid startCameraServer(){\n  httpd_config_t config = HTTPD_DEFAULT_CONFIG();\n  config.server_port = 80;\n\n  httpd_uri_t index_uri = {\n    .uri       = "/",\n    .method    = HTTP_GET,\n    .handler   = stream_handler,\n    .user_ctx  = NULL\n  };\n\n  //Serial.printf("Starting web server on port: '%d'\\n", config.server_port);\n  if (httpd_start(&stream_httpd, &config) == ESP_OK) {\n    httpd_register_uri_handler(stream_httpd, &index_uri);\n  }\n}\n\nvoid setup() {\n  WRITE_PERI_REG(RTC_CNTL_BROWN_OUT_REG, 0); //disable brownout detector\n  // s1.setPeriodHertz(50);\n  // s2.setPeriodHertz(50);\n  // sn1.attach(2,1000,2000);\n  // sn2.attach(13,1000,2000);\n  // s1.attach(Ser_1,1000,2000);\n  // s2.attach(Ser_2,1000,2000);\n  // s1.write(pos1);\n  // s2.write(pos2);\n  Serial.begin(115200);\n  Serial.setDebugOutput(false);\n\n  camera_config_t config;\n  config.ledc_channel = 
LEDC_CHANNEL_0;\n  config.ledc_timer = LEDC_TIMER_0;\n  config.pin_d0 = Y2_GPIO_NUM;\n  config.pin_d1 = Y3_GPIO_NUM;\n  config.pin_d2 = Y4_GPIO_NUM;\n  config.pin_d3 = Y5_GPIO_NUM;\n  config.pin_d4 = Y6_GPIO_NUM;\n  config.pin_d5 = Y7_GPIO_NUM;\n  config.pin_d6 = Y8_GPIO_NUM;\n  config.pin_d7 = Y9_GPIO_NUM;\n  config.pin_xclk = XCLK_GPIO_NUM;\n  config.pin_pclk = PCLK_GPIO_NUM;\n  config.pin_vsync = VSYNC_GPIO_NUM;\n  config.pin_href = HREF_GPIO_NUM;\n  config.pin_sscb_sda = SIOD_GPIO_NUM;\n  config.pin_sscb_scl = SIOC_GPIO_NUM;\n  config.pin_pwdn = PWDN_GPIO_NUM;\n  config.pin_reset = RESET_GPIO_NUM;\n  config.xclk_freq_hz = 20000000;\n  config.pixel_format = PIXFORMAT_JPEG;\n\n  if(psramFound()){\n    config.frame_size = FRAMESIZE_UXGA;\n    config.jpeg_quality = 10;\n    config.fb_count = 2;\n  } else {\n    config.frame_size = FRAMESIZE_SVGA;\n    config.jpeg_quality = 12;\n    config.fb_count = 1;\n  }\n\n  // Camera init\n  esp_err_t err = esp_camera_init(&config);\n  if (err != ESP_OK) {\n    Serial.printf("Camera init failed with error 0x%x", err);\n    return;\n  }\n  // Wi-Fi connection\n  WiFi.begin(ssid, password);\n  while (WiFi.status() != WL_CONNECTED) {\n    delay(500);\n    Serial.print(".");\n  }\n  Serial.println("");\n  Serial.println("WiFi connected");\n\n  Serial.print("Camera Stream Ready! Go to: http://");\n  Serial.print(WiFi.localIP());\n\n  // Start streaming web server\n  startCameraServer();\n}\n\nvoid loop() {\n  delay(1);\n}\n
\n

Now my question is: how can I control the GPIO pins on the ESP32 Camera from the Python script, so that when an object is detected the ESP32-Cam can activate an alarm system?

\n", "Title": "How to control the state of GPIO pins on ESP32 Camera from an external python script?", "Tags": "|esp32|python|gpio|", "Answer": "

The easy way is to use an HTTP request: when your Python app detects something, send a POST request to the ESP32. This link could help you:

\n

https://stackoverflow.com/questions/10313001/is-it-possible-to-make-post-request-in-flask

\n

On the ESP32 you will receive the request; since you already have esp_http_server running, you just need to add a handler that decides what to do when the request arrives.
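On the Python side, the request can be as small as this sketch (the `/alarm` endpoint name and the `state` field are assumptions; match them to whatever URI handler you register with `esp_http_server` on the ESP32):

```python
import urllib.parse
import urllib.request

def build_alarm_request(esp32_ip, state):
    """Build a POST request asking the ESP32 to drive the alarm GPIO.

    The "/alarm" path and "state" field are hypothetical names; they must
    match the URI handler registered on the ESP32.
    """
    data = urllib.parse.urlencode({"state": state}).encode()
    return urllib.request.Request(f"http://{esp32_ip}/alarm", data=data, method="POST")

def trigger_alarm(esp32_ip):
    # Call this from the detection loop when an object is found.
    with urllib.request.urlopen(build_alarm_request(esp32_ip, 1), timeout=5) as resp:
        return resp.status
```

On the ESP32 side the matching handler would parse the body and call gpio_set_level() on the alarm pin.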

\n" }, { "Id": "6445", "CreationDate": "2022-10-25T22:07:06.243", "Body": "

I'm a newbie building an IoT system composed of two clients: the first client (Client A) will be a Subscriber and the second client (Client B) will be a Publisher. For the Broker, I intend to use something hosted (e.g. Cedalo).

\n

Right now I don't have Client B; it will be some sort of hardware device that reads data from sensors. Because Client B does not exist yet, I'm using Client A as both Subscriber and Publisher. Client A is running Node.js on a virtual machine and I have both the express and mqtt packages installed.

\n

The express server is up and running (listening and responding to HTTP requests), and in the same index.js file as the express server is the MQTT code that publishes and subscribes to a topic. This is also working: I can publish and subscribe, and data from the subscribed topic is displayed in the console.

\n

Here is the working code:

\n
let fs = require('fs');\nlet http = require('http');\nlet https = require('https');\nlet mqtt = require('mqtt');\n\nlet express = require('express');\nlet app = express();\nlet http_port = 8080;\n\nlet host = `mqtt://test.mosquitto.org`;\nlet mqttPort = `1883`;\nlet clientId = `domain`;\nlet connectUrl = `${host}:${mqttPort}`;\n\nlet httpServer = http.createServer(app);\n\nlet client = mqtt.connect(connectUrl, {\n    clientId, clientId,\n    clean: true,\n    connectTimeout: 4000,\n    username: `domain`,\n    password: `secret`,\n    reconnectPeriod: 1000,\n});\n\nlet topic = `ambientTemp`;\nclient.on(`connect`, () => {\n    console.log(`Connected to Broker at ${connectUrl}`);\n    client.subscribe([topic], () => {\n        console.log(`Subscribed to topic: ${topic}`);\n    });\n});\n\nclient.on(`error`, (err) => {\n    console.log(`Error: ${err}`);\n});\n\nclient.on(`message`, (topic, payload) => {\n    console.log(`Received Message:`, topic, payload.toString())\n})\n\nclient.on('connect', () => {\n    client.publish(topic, '56 Deg.C', {\n        qos: 0,\n        retain: false }, (error) => {\n            if (error) {\n                console.error(error);\n            }\n        })\n})\n\n// For http\nhttpServer.listen(http_port, () => {\n    console.log(`Listenting on Port:${http_port} Insecure`);\n});\n\n// this is the home route\napp.get('/', function (req, res) {\n    console.log("Hit Detected");\n    res.write('<h1>Hello World</h1>');\n    res.write(`<h4>v4<h4>`);\n    res.end()\n});\n
\n

... here is the output from the browser and terminal:\n\"enter

\n

The problems start when I try to naively combine the "publish step" with the "route step" like this:

\n
let fs = require('fs');\nlet http = require('http');\nlet https = require('https');\nlet mqtt = require('mqtt');\n\nlet express = require('express');\nlet app = express();\nlet http_port = 8080;\n\nlet host = `mqtt://test.mosquitto.org`;\nlet mqttPort = `1883`;\nlet clientId = `domain`;\nlet connectUrl = `${host}:${mqttPort}`;\n\nlet httpServer = http.createServer(app);\n\nlet client = mqtt.connect(connectUrl, {\n    clientId, clientId,\n    clean: true,\n    connectTimeout: 4000,\n    username: `domain`,\n    password: `secret`,\n    reconnectPeriod: 1000,\n});\n\nlet topic = `ambientTemp`;\nclient.on(`connect`, () => {\n    console.log(`Connected to Broker at ${connectUrl}`);\n    client.subscribe([topic], () => {\n        console.log(`Subscribed to topic: ${topic}`);\n    });\n});\n\nclient.on(`error`, (err) => {\n    console.log(`Error: ${err}`);\n});\n\nclient.on(`message`, (topic, payload) => {\n    console.log(`Received Message:`, topic, payload.toString())\n})\n\nclient.on('connect', () => {\n    client.publish(topic, '56 Deg.C', {\n        qos: 0,\n        retain: false }, (error) => {\n            if (error) {\n                console.error(error);\n            }\n        })\n})\n\n// For http\nhttpServer.listen(http_port, () => {\n    console.log(`Listening on Port:${http_port} Insecure`);\n});\n\n// this is the home route\napp.get('/', function (req, res) {\n    console.log("Hit Detected");\n    res.write('<h1>Hello World</h1>');\n    res.write(`<h4>v4<h4>`);\n\n    client.on('connect', () => {                  //Additional code block causing issues\n        client.publish(topic, '56 Deg.C', {       //Additional code block causing issues\n            qos: 0,                               //Additional code block causing issues\n            retain: false }, (error) => {         //Additional code block causing issues\n                if (error) {                      //Additional code block causing issues\n                    console.error(error);         
//Additional code block causing issues\n                }                                 //Additional code block causing issues\n            })                                    //Additional code block causing issues\n    })                                            //Additional code block causing issues\n\n    res.end()\n});\n
\n

.... I get this:

\n

\"enter

\n

It doesn't make sense to increase the maximum number of listeners because clearly there is something terribly wrong. This is what I think I know ... that node.js is asynchronous and event driven and that "app.get" is a callback function, as is "client.on" ... I'm struggling to understand why I can't seem to nest these callbacks. I'm missing something fundamental, I think.

\n

It's also not clear to me why the "client.on" callback that is nested within the route does not even get called once, i.e. I have put a console.log immediately prior to it and immediately after it and only the first message is displayed (the code doesn't show this, just to explain what I have tried).

\n

Any guidance would be appreciated.

\n

Thank you,\nTim.

\n", "Title": "MQTT Client: Listeners Exceeded Warning", "Tags": "|mqtt|nodejs|", "Answer": "

The on('connect', ...) callback will only ever be called once, when the client connects. Adding a new event listener for this event every time the GET request is handled won't do anything useful and, as the warning states, leaks "connect" event listeners.

\n

If you just want to publish a message on each request, remove the on('connect',...) wrapper and just call client.publish(...)

\n
app.get('/', function (req, res) {\n    console.log("Hit Detected");\n    res.write('<h1>Hello World</h1>');\n    res.write(`<h4>v4<h4>`);\n\n    client.publish(topic, '56 Deg.C', {\n        qos: 0,\n        retain: false \n    }, (error) => {\n        if (error) {\n             console.error(error);\n        }\n    })\n    \n    res.end()\n});\n
\n" }, { "Id": "6456", "CreationDate": "2022-11-04T15:56:36.780", "Body": "

In my neighbourhood people burn a lot of firewood. I always deactivate my automatic ventilation when I smell it, but I would like to have something better.

\n

Are there sensors to detect this?

\n", "Title": "Detect bad air quality because of burned firewood", "Tags": "|smart-home|sensors|", "Answer": "

You could try an MQ135 gas sensor and tune it to detect the smoke. It has a potentiometer that changes its sensitivity; adjust it until the sensor triggers at the smoke density you care about. You could use it with an ESP8266 or ESP32, which are cheap, capable and resilient (if you buy from a reliable source). That's your project right there.

\n

ESP8266 or ESP32\nMQ135 sensor\nSome wires, or a PCB if you can\nA 5 V source (an old phone charger) and you are good to go.\nYou can find ready-to-use code examples online.
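The detection logic itself can stay tiny; here is a sketch of a debounced threshold check in plain Python (the threshold of 600 and the 5-sample window are assumptions -- calibrate them against the readings your MQ135 actually produces):

```python
def smoke_detected(readings, threshold=600, window=5):
    """Return True when the average of the last `window` ADC samples
    exceeds `threshold`. Both defaults are placeholder values that need
    on-site calibration together with the MQ135's potentiometer setting."""
    if len(readings) < window:
        return False  # not enough samples yet to decide
    recent = readings[-window:]
    return sum(recent) / window > threshold
```

Averaging a short window rather than acting on a single sample keeps the ventilation from toggling on every brief spike.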

\n" }, { "Id": "6460", "CreationDate": "2022-11-07T15:22:03.740", "Body": "

I am using MQTT to build a project and I want to move the whole thing to production-ready. My hardware is an ESP-07 on a custom PCB design, the broker is Mosquitto, and the app is Node.js / React Native.

\n\n

I am struggling to design an infrastructure so that many users can use my device on the same broker without having access to other people's devices. I know about access control, but I am asking whether there is a plugin to automate it.

\n

Are there any other elements I should be aware of before launching my product to the market?

\n", "Title": "Mqtt in production", "Tags": "|mqtt|security|esp8266|mosquitto|tls|", "Answer": "

HiveMQ might be one possible production-ready MQTT broker. They also have a community edition, though it may have some limitations.
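If you end up staying on Mosquitto, the per-user isolation asked about does not need a plugin: Mosquitto's ACL file supports a `pattern` rule with the `%u` (username) substitution, so each authenticated client is confined to its own topic subtree. A sketch (file paths and the topic prefix are assumptions):

```
# mosquitto.conf (excerpt)
allow_anonymous false
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/acl

# /etc/mosquitto/acl
# each user may only publish/subscribe under devices/<their username>/...
pattern readwrite devices/%u/#
```

With this in place, giving every customer their own broker username automatically walls off their devices from everyone else's.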

\n" }, { "Id": "6463", "CreationDate": "2022-11-08T18:20:53.097", "Body": "

So I've been looking into NAT traversing for my IoT project, and I'm a little confused. Here's how it goes:

\n

When I go out to ask Google "what is my public IP address?", I get an address like 46.114.190.96. This is, as far as I understand, my Wi-Fi router's IP address, since it serves as the public "portal" between my LAN and the public internet. Let's call this "the public IP".

\n

The router also has a local IP address, 192.168.0.1, which I can use to log in to my router and change some settings. I'm setting up Virtual Server (for port forwarding, as I'd like to access my home server from outside the LAN). I've set it up so that it forwards incoming traffic to my local server, on which for testing I hosted a simple web page. So far so good.

\n

Things get confusing when I actually try to access the server, from outside the home network. If I type in 46.114.190.96, the page doesn't load. I've set the ports and everything up correctly.

\n

BUT when I log into my Wi-Fi router, on the landing page it gives me another IP: 10.114.160.96. Let's call this one the "router IP"

\n

and when I type this address in, voila, the page loads.

\n

Even weirder, some of my friends typed the "router IP" in and can also access the page, and some other friends cannot (their browser just keeps trying to load until timeout). I've checked that the "router IP" is still the same. It does change once every now and then though.

\n

So my question is:

\n\n", "Title": "Confused: What is this weird IP address?", "Tags": "|routers|ip-address|", "Answer": "

This sounds like your ISP is operating CGNAT (Carrier Grade Network Address Translation).

\n

Most home networks normally operate with NAT (Network Address Translation), which means that all the devices on the home network are given an IP address out of an RFC1918 address range (in this case 192.168.0.x) and the router will remap this to the external IP address given to your router by the ISP. This IP address may change over time (it usually does, unless you are paying for a static IP address), but if you know this external address you can set up port forwarding and access devices inside your network.

\n

Now, when an ISP deploys CGNAT they no longer hand out a publicly routed IP address to your router; instead they treat all the routers on their network like a home LAN and give them an address, again out of the RFC1918 ranges (in this case a 10.x.x.x address, though they probably should be using 100.64.x.x-100.127.x.x), and then do translation to a much smaller range of public IP addresses at the edge of their network.

\n

This basically means that the ISP can VASTLY reduce the number of publicly routeable IP addresses they need (which are very expensive, now that the only way to get them is to buy them from existing owners rather than getting them assigned from RIPE).

\n

But it also means that you won't be able to use port forwarding, because it would need to be setup on both your router and the ISPs edge router (which isn't going to happen).

\n

Your choices are as follows:

\n
    \n
  1. Move to a new ISP that isn't using CGNAT (this will get harder and harder to do)
  2. Use something like ngrok to set up an outbound tunnel that can be used to access servers
\n" }, { "Id": "6513", "CreationDate": "2022-12-26T09:12:13.987", "Body": "

I've been using a few of the device of the same model for several months. While they've been working great, one of them recently started showing strange behavior that I cannot find defined in the user manual. See the video I took and uploaded on my Google Drive.

\n\n

Anyone has ideas of what's going on? I want to see if it's malfunctioning (so that I should ask the manufacturer's support).

\n\n", "Title": "Aeotec Trisensor undefined behavior", "Tags": "|sensors|", "Answer": "

Thanks @jsotola for nudging me to talk to the manufacturer, who is also confirming (final confirmation in progress) that this is a malfunction.

\n" }, { "Id": "6537", "CreationDate": "2023-01-16T00:30:26.300", "Body": "

I wrote a TCP client in C and a TCP server in Python. The client runs on an ESP32-S2 board while the server runs on my PC (in a virtual Linux OS); both the board and the PC are connected to the same Wi-Fi. However, even though the same client code works as expected on my PC, it does not work when loaded onto the ESP32-S2: the connect() function returns errno 113. I was wondering what the underlying issue could be.

\n

Here is the client code (code handling Wi-Fi connection is omitted for simplicity):

\n
#define SERVER_IP   AF_INET           \n#define SERVER_ADDR "192.168.1.157" \n#define SERVER_PORT 5566              \n\nstatic int client_fd;\n\nstatic void client_init(struct sockaddr_in addr) {\n    client_fd = socket(SERVER_IP, SOCK_STREAM, 0);\n\n    addr.sin_family = SERVER_IP;\n    addr.sin_port   = htons(SERVER_PORT);\n    \n    if (inet_pton(SERVER_IP, SERVER_ADDR, &(addr.sin_addr)) < 0) {\n        ESP_LOGE(TAG, "Invalid address or protocol (errno: %d)", errno);\n    }\n\n    bzero(&(addr.sin_zero), 8);\n\n    if (connect(client_fd, (struct sockaddr *)(&addr), sizeof(addr)) == -1) {\n        ESP_LOGE(TAG, "connect error (errno: %d)", errno);\n    }\n}\n\nvoid app_main() {\n    wifi_sta_init();\n    \n    struct sockaddr_in server_addr;\n    client_init(server_addr);\n}\n
\n

Here is the server code:

\n
import socket\nimport sys\n\nserver_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\nserver_addr = ('0.0.0.0', 5566)\nserver_sock.bind(server_addr)\n\nserver_sock.listen(1)\n\nwhile True:\n    print("waiting for a connection")\n    connection, client_addr = server_sock.accept()\n\n    try:\n        print(f"connection from {client_addr}")\n\n        while True:\n            data = connection.recv(1024)\n\n            with open('test.txt', 'wb') as file:\n                file.write(data)\n\n            print(f"received {data}")\n            if data:\n                break\n            else:\n                print("no more data from {client_address}")\n                break\n            \n    finally:\n        connection.close()\n
\n

Update:\nThe issue is resolved after I set the network adapter setting of the virtual machine to bridged.

\n", "Title": "Failed to connect to TCP server on ESP32", "Tags": "|networking|esp32|ip-address|", "Answer": "

The issue is resolved after I set the network adapter of the virtual machine to bridged.
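When debugging this class of problem, it helps to first confirm plain TCP reachability of the server from another machine on the LAN: with the VM's adapter in NAT mode the probe below fails from anywhere except the host, while with bridged networking it succeeds. A small sketch:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `can_connect("192.168.1.157", 5566)` from a second LAN device tells you whether the problem is the ESP32 code or the network path.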

\n" }, { "Id": "6541", "CreationDate": "2023-01-21T02:16:35.333", "Body": "

Trying to get an understanding of the cloud side compute power comparison (and cost) for say 10,000 devices sending data to the cloud using HTTPS Vs MQTT.

\n

My co-worker says keeping 10k MQTT connections alive would eclipse the cost of having the same devices do HTTPS POSTs. My intuition says it's probably about the same, since you'll always need the power to support all the connections even when using HTTPS, plus the overhead of building those connections every time.

\n

Does anyone have any experience comparing? Or would anyone have anecdotal experience?

\n

Appreciate the help!

\n", "Title": "compute power for MQTT vs HTTP POST in cloud", "Tags": "|mqtt|data-transfer|https|cloud-computing|", "Answer": "

An idle connection will use very, very little power. You just need to handle a few keep-alive packets now and then, plus possibly a slightly larger CPU overhead to correctly map incoming packets to the right connection (both at the OS network-stack level and in your app), but that should be negligible.

\n

Keeping tens of thousands or hundreds of thousands of connections active on a single server is not a problem if you use the right tools (I.e. you don\u2019t have an Apache proxy on the path, for instance).

\n

Establishing an HTTPS connection, on the other hand, is extremely expensive, both in terms of CPU and network traffic. You first need to establish a TCP connection, then TLS inside that, then HTTP inside that, probably also adding authentication and any application-level handshake on top of that.

\n
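To put rough numbers on this, count handshakes rather than bytes: with persistent MQTT connections each device pays the TCP+TLS setup once per connection lifetime, while per-request HTTPS (without connection reuse) pays it on every message. The figures below are illustrative assumptions, not measurements:

```python
# Handshakes per hour for 10,000 devices each sending one message per minute.
# Message rate and reuse behaviour are assumptions for illustration.
devices = 10_000
msgs_per_device_per_hour = 60

persistent_mqtt = devices  # one TLS handshake per device, then reused
per_request_https = devices * msgs_per_device_per_hour  # one per message

print(persistent_mqtt, per_request_https)
```

Even this crude model shows a 60x difference in handshake work, which is where the CPU cost actually lives.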

That\u2019s the whole reason there are so many mechanisms to avoid doing it again (connection keep-alive, to serve several HTTP requests within a single TCP+TLS connection), or to speed up new negotiations (with session caching, though that is limited in time).

\n

Also, keeping the connection established has one huge advantage: you can send data from server to client at any time, while with individual connections you need to either perform polling (nooooooo) or long polling to do that, which gives the same result as keeping the connection alive, but with higher overhead.

\n

Unless there\u2019s something specific in your setup that would prevent it, go for the permanent connections.

\n" }, { "Id": "6545", "CreationDate": "2023-01-25T20:20:05.963", "Body": "

I have five Chromecast Audio devices which produce 3.5mm/optical audio output from one source. I also have a 1st-gen Chromecast with HDMI audio (the "keyhole" shape) which cannot play audio streams shared with Chromecast Audio. It will support Spotify, but not as part of an audio group with the five Chromecast Audio devices.

\n

Can a 4th-generation Chromecast (product name "Chromecast with Google TV", either HD or 4K) play a Chromecast audio stream shared with Chromecast Audio devices?

\n", "Title": "Can a 4th Gen Chromecast (\"Chromecast with Google TV\") work in a group with Chromecast Audio?", "Tags": "|chromecast|", "Answer": "

I've not actually added one, but the Home app implies so.

\n

Both Chromecasts listed are the new HD "with Google TV" models, and the old keyhole-shaped model is not listed as an option.

\n

\"enter

\n" }, { "Id": "6574", "CreationDate": "2023-02-18T20:27:08.323", "Body": "

I'm considering buying the Sure Petcare Microchip Cat Flap Connect.

\n

\"pet\"pet

\n

I like the idea of being able to set a schedule for when my cat is allowed to exit the house and also to control it via the internet when I want to change the schedule temporarily.

\n

It says "You\u2019ll need [to also buy the $85] Hub to use the Sure Petcare App."

\n

The app gets terrible reviews. So I likely don't want to use it if I can avoid it.

\n

I'm a software engineer and am new to IOT hacking.

\n

I'm curious whether there would be some way to buy just the cat flap (and not the hub) and write my own software for controlling it (if the cat flap could connect to my wifi network somehow).

\n

Where could I learn how to do this?

\n", "Title": "Hacking wifi control of pet door", "Tags": "|wifi|mobile-applications|", "Answer": "

The work the good people around openHAB do seems to be the way to go when using the hub, as jcaron already mentioned.

\n

Weirdly, the manual doesn't say which wireless technology it uses, except "Proprietary Wireless Control: 2425 MHz - 2480 MHz". So if you want to forgo the hub you'll need to do some radio sleuthing, which likely won't be a short project. As a side note, those frequencies go into the range where you're amplitude-limited in the US. So unless you're an expert in radio communications, this is unlikely to be a great starter project for IoT hacking.

\n

Either way, if you don't care about using the hub and just don't want the app, openHAB is probably the best way to go, probably with this project.

\n" }, { "Id": "6577", "CreationDate": "2023-02-23T17:57:05.760", "Body": "

I'm looking for information about the Ethernet chip on the Orange Pi 5. I need an SBC that has PTP hardware timestamp support (IEEE 1588). According to OrangePi.org, the Orange Pi 5 has the Motorcomm YT8531C, but I have not been able to find documentation that provides the answer about hardware PTP support. Most likely it does not, but I'd like to confirm on an actual device before purchasing one.

\n

If you have an Orange Pi 5, you can test this with this command:

\n

ls /sys/class/net |xargs -n1 ethtool -T

\n

If there is someone that can post the output of this command, I would appreciate it very much. Sorry for the odd question, this is the best StackExchange site I could find to ask this question. I would ask on the OrangePi forum, but it doesn't run HTTPS, so I'd rather not create an account there.

\n", "Title": "Does Orange Pi 5 have PTP hardware timestamping support?", "Tags": "|ethernet|", "Answer": "

Yes it does.

\n

https://forum.armbian.com/topic/26913-does-orange-pi-5-have-ptp-hardware-timestamping-support/#comment-160778

\n

A kind user on the armbian forum, @royk, ran the ethtool command, and posted the output which shows:

\n
Time stamping parameters for eth0:\nCapabilities:\n        hardware-transmit\n        software-transmit\n        hardware-receive\n        software-receive\n        software-system-clock\n        hardware-raw-clock\nPTP Hardware Clock: 0\nHardware Transmit Timestamp Modes:\n        off\n        on\nHardware Receive Filter Modes:\n        none\n        all\n        ptpv1-l4-event\n        ptpv1-l4-sync\n        ptpv1-l4-delay-req\n        ptpv2-l4-event\n        ptpv2-l4-sync\n        ptpv2-l4-delay-req\n        ptpv2-event\n        ptpv2-sync\n        ptpv2-delay-req \n
\n" }, { "Id": "6592", "CreationDate": "2023-03-18T17:09:46.893", "Body": "

I've set up a MoesGo Smart IR unit with Smart Life for a friend to control a TV. I can now use the virtual remote control on my phone to control the TV, which is quite handy when he loses the physical remote.

\n

The Smart Life account was integrated with Google Home a while ago so that all the smart appliances he has can be voice controlled. Now that we have a Smart IR Unit we can use voice commands to:

\n\n

This was just a matter of guessing the commands, and to be honest they were fairly obvious. The problem we've got is that there is a lot more we'd like to do with the TV, but we are unable to guess the right commands! For a start we'd like to:

\n\n

We've tried everything we can think of but Google Home just says that it doesn't understand. I did find a web site that suggested saying "Talk to TV", and Google does appear to understand this is a valid command, but the action it takes is to turn the TV off! (I suspect it's toggling the power button)

\n

Does anybody know what commands we can use?

\n", "Title": "How to Control a TV via Google-Home and Smart Life", "Tags": "|smart-home|google-home|smart-tv|", "Answer": "

To control the specific functions you mentioned, you may need to add custom commands or routines to your Google Home app. Here are some general steps you can take to create custom voice commands:

\n
    \n
  1. Open the Google Home app on your phone or tablet.
  2. Tap the "+" sign at the top left corner of the screen to create a new routine.
  3. Give your routine a name that you'll remember, like "Watch Netflix."
  4. Under "When," select "Voice Command" and enter the phrase you want to use to activate the routine, like "Watch Netflix."
  5. Under "Actions," select "Add Action" and choose "Smart Home."
  6. Select the device you want to control, like the MoesGo Smart IR Unit, and choose the function you want to perform, like "Select HDMI1" or "Press Netflix Button."
  7. Repeat step 6 for any additional functions you want to include in the routine.
  8. Tap "Save" to save your routine.
\n

Once you've created your routine, you should be able to activate it by saying the voice command you chose, like "Watch Netflix." Google Home should then send the appropriate commands to the MoesGo Smart IR Unit to perform the functions you specified in the routine.

\n

If you don't see the specific functions you want to control in the Google Home app, you may need to consult the documentation for the MoesGo Smart IR Unit or contact their customer support for assistance in adding these functions to your device.

\n" }, { "Id": "6597", "CreationDate": "2023-03-23T16:21:29.340", "Body": "

What do device developers and major platform providers such as Google Home, Apple, Tuya, and Aqara do to control Matter devices remotely? Do they still use traditional IoT protocols like MQTT?

\n

I tried searching but didn't have much information. Please help me with this question.

\n", "Title": "How do I access my Matter devices when I\u2019m not at home?", "Tags": "|smart-home|mqtt|protocols|cloud-computing|matter|", "Answer": "

Yes, they still need to use "IoT Protocols"

\n

Communication between the vendor's cloud services and a hub device (e.g. an Amazon Echo or a Google/Nest Hub) still uses things like MQTT; the hub then maps this to a Matter/Zigbee/Thread message to the individual devices.

\n" }, { "Id": "6607", "CreationDate": "2023-04-07T21:06:33.037", "Body": "

I would like to experiment with a platform that can run a virtual assistant (think e.g. https://mycroft.ai/), performing tasks like speech-to-text, conversations, text-to-speech, image segmentation and object detection (e.g. when fed an RTSP stream).

\n

I'm also curious about performing inference with large language models (LLMs), like those from Facebook Research's LLaMA (https://github.com/facebookresearch/llama).

\n

Would a platform like the Nvidia Jetson Orin Nano (8GB RAM) support LLM inference, or would it still be underpowered, e.g. memory-constrained? What specs should I aim for? I would mainly use PyTorch as the underlying framework.

\n", "Title": "Do Jetson Orin Nano-like systems with 8GB RAM suffice to experiment with large language models?", "Tags": "|machine-learning|", "Answer": "

Yes, sufficient for a 7B model.

\n

You may refer to this link for details: https://nvidia-ai-iot.github.io/jetson-generative-ai-playground/tutorial_text-generation.html

\n
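A quick back-of-envelope check explains why: model weights take roughly bits/8 bytes per parameter, so a 7B model only fits an 8 GB board once quantized. The arithmetic (weights only; activations and KV cache add more on top):

```python
# Approximate weight memory for a 7B-parameter model at common precisions.
params = 7e9
for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = params * bits / 8 / 2**30
    print(f"{name}: {gib:.1f} GiB")
# fp16: 13.0 GiB, int8: 6.5 GiB, int4: 3.3 GiB
```

At 4-bit that is about 3.3 GiB of weights, leaving headroom for activations and the OS on an 8 GB Orin Nano, which matches the table linked above.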

\"enter

\n" }, { "Id": "6608", "CreationDate": "2023-04-07T21:45:14.760", "Body": "

I just took delivery of a trio of ESP32-WROOM-32 dev boards and am unable to get micropython to run on them. I've successfully flashed sketches from the Arduino IDE, but after I flash the latest esp32spiram.bin and try to connect with Thonny, I see an endless loop of

\n
ELF file SHA256: 46bca36b7d6020a6\n\nRebooting...\nets Jul 29 2019 12:21:46\n\nrst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)\nconfigsip: 0, SPIWP:0xee\nclk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00\nmode:DIO, clock div:2\nload:0x3fff0030,len:4540\nho 0 tail 12 room 4\nload:0x40078000,len:12788\nload:0x40080400,len:4176\nentry 0x40080680\nE (650) psram: PSRAM ID read error: 0xffffffff\nE (651) spiram: SPI RAM enabled but initialization failed. Bailing out.\nE (652) spiram: SPI RAM not initialized\nRe-enable cpu cache.\n\nabort() was called at PC 0x400d3ea6 on core 0\n\nBacktrace:0x400964a9:0x3ffe3b60 0x40096a99:0x3ffe3b80 0x4009a355:0x3ffe3ba0 0x400d3ea6:0x3ffe3c10 0x400d3ef7:0x3ffe3c30 0x400825ee:0x3ffe3c50 0x40081e1b:0x3ffe3c70 0x40078f5c:0x3ffe3c90 |<-CORRUPTED\n
\n

Can anyone suggest what I might be doing wrong?

\n", "Title": "Unable to run micropython on ESP32-WROOM", "Tags": "|esp32|micropython|", "Answer": "

I was facing the same issue, trying various Micropython versions with my ESP32 WROOM chip.

\n

Micropython's download page for the appropriate ESP Firmware makes it clear they support SPIRAM / PSRAM:

\n
\n

This firmware supports configurations with and without SPIRAM (also known as PSRAM) and will auto-detect a connected SPIRAM chip at startup and allocate the MicroPython heap accordingly. However if your board has Octal SPIRAM, then use the "spiram-oct" variant.

\n
\n

Only after looking at the ESP32-S3-WROOM Datasheet, I realized that there are packages without PSRAM, and that the one sitting on my desk was one of them (ESP-S3-WROOM-1-N16).

\n

\"ESP32-S3-WROOM-1

\n

After erasing and re-flashing the Firmware, I am now able to use Micropython on the ESP32 with the 512KB of internal RAM (the warnings still appear, however).

\n" }, { "Id": "6616", "CreationDate": "2023-04-16T14:44:52.810", "Body": "

I'm trying to connect to an ESP32-WROOM-32 in AP mode, using the following micropython code:

\n
import network\n\nssidAP         = 'WiFi_ESP32' #Enter the router name\npasswordAP     = '12345678'  #Enter the router password\n\nlocal_IP       = '192.168.1.10'\ngateway        = '192.168.1.1'\nsubnet         = '255.255.255.0'\ndns            = '8.8.8.8'\n\nap_if = network.WLAN(network.AP_IF)\n\ndef AP_Setup(ssidAP,passwordAP):\n    ap_if.ifconfig([local_IP,gateway,subnet,dns])\n    print("Setting soft-AP  ... ")\n    ap_if.active(True)\n    ap_if.config(essid=ssidAP,authmode=network.AUTH_WPA_WPA2_PSK, password=passwordAP)\n    print('Success, IP address:', ap_if.ifconfig())\n    print("Setup End\\n")\n\ntry:\n    AP_Setup(ssidAP,passwordAP)\nexcept:\n    print("Failed, please disconnect the power and restart the operation.")\n    ap_if.disconnect()\n
\n

This code is a direct copy from the FreeNove ESP32 tutorial.

\n

The device shows up on my phone as a WiFi AP, but when I try to connect, my phone spins for a minute and then reports "Couldn't obtain IP address". I've tried this with two different phones (both Android) and two different boards (the other is an ESP32-WROVER-E), and I've also confirmed that it does work when running the equivalent Arduino (C++) sketch.

\n

What am I doing wrong? I'm running MicroPython v1.19.1 firmware.

\n", "Title": "Can't Connect to ESP32 in AP Mode", "Tags": "|wifi|esp32|micropython|", "Answer": "

As of MicroPython 1.20.0 (2023-04-26) the above code works correctly.

\n" }, { "Id": "6630", "CreationDate": "2023-05-05T07:53:03.690", "Body": "

I'm quite new to IoT in general.

\n

I have a Dragino LPS8v2 gateway, and I would like to create a private LoRaWAN network for my end device (TTGO T-Beam).

\n

What I did:

\n\n

\"Gateway's

\n\n

\"Link

\n

What I expect:

\n

I expect my end device to connect to the Dragino gateway, and therefore use the built-in LoRaWAN server.

\n

What is happening:

\n\n

For context, the Dragino GW is next to the end device (<1 meter, since I'm still setting it up and debugging). The other GW is 5-10 meters away.

\n

Also, when checking the Dragino's traffic, the DevEUI shown is that of my end device, which is registered to an application on The Things Network.

\n

I have a few questions:

\n
    \n
  1. Why is the end device not connecting to the Dragino GW? Is it because it is too close? Or because the other GW responded faster?
  2. \n
  3. Is it correct to choose The Things Network V3 as the primary LoRaWAN server, even though I want to use the built-in LoRaWAN server? And if so, why?
  4. \n
  5. When using a built-in LoRaWAN server, the GW shouldn't be connected to The Things Network, is that right?
  6. \n
  7. Should I add an application in The Things Stack, as well as an end device? I already created an app and registered the end device on The Things Network...
  8. \n
\n

Thanks in advance.

\n", "Title": "Gateway receives Join Request from end device, but it doesn't accept", "Tags": "|lorawan|", "Answer": "

If anyone is facing the same problem, here's what I did to make it work:

\n\n

And it worked. The reason there was no Join Accept is that the end device had the wrong keys (AppKey, DevEUI, etc.) and the primary LoRaWAN server was not correctly configured.

\n

Check this post: #6034 LoRaWAN device not receiving Join Accept

\n

Hope this can help!

\n" }, { "Id": "6655", "CreationDate": "2023-06-24T14:11:41.527", "Body": "

I was recently given a trail camera, "wildlife camera." It works great, but the mobile app to connect and see the pictures/videos is very janky and hard to use. I write software for a living, so building something that can read files from a server is the easy part. What I'm trying to figure out is how to connect to the camera wirelessly. (I mostly figured that out while writing this post)

\n

Here's what I can discern so far.

\n
    \n
  1. The app appears to connect via Bluetooth first.
  2. \n
  3. It then prompts "Join network TC08-...."
  4. \n
  5. I tried joining the wireless network directly from macOS while connected to the app; it prompted me with the "hand-off" feature where you can share a password across devices. When I went and checked what password was shared it was, wait for it, 12345678.
  6. \n
  7. The network only allows one device at a time (is that a thing?), because if I get the computer connected, the iOS app can't connect.
  8. \n
  9. The WiFi network only boots up when I connect to Bluetooth; I assume this is a power-saving feature. I don't see the camera in my list of available Bluetooth devices in macOS.
  10. \n
  11. Once connected, I used nmap to find the open IP/ports. Here's the list: 80, 443, 3333, 8192, 8193.
  12. \n
  13. I can navigate in the browser to the IP and get download links for all the photos and videos; it's basically a very simple webpage: /DCIM/MOVIE is where the videos are stored.
  14. \n
\n

So this is very close to my goal; however, I don't want to download each video file. The app has access to thumbnails of the video files, but I'm not sure where they are stored. If I could connect the two devices, I wonder whether I could use Wireshark to view which paths the app requests.

\n

The two parts I'd like to nail down:

\n
    \n
  1. Connect over Bluetooth from the computer, so I can trigger the WiFi to start.
  2. \n
  3. Find the thumbnails so I don't have to download each video file.
  4. \n
\n", "Title": "How to inspect trail camera network", "Tags": "|networking|wifi|", "Answer": "

Most definitely not a complete answer as we lack a lot of info and this is going to be a lot of trial and error to attempt to get something, but a few pointers:

\n\n

Are you sure the \u201chand-off\u201d was from the device? My guess is that it\u2019s more likely to be from your phone (and the phone got it from the app, there are APIs to automatically connect to WiFi networks).

\n

As for the thumbnails, either they are somewhere in the file tree as well or there is some webservice to get them (as well as a list of videos etc.). If nothing seems to be available directly in the file hierarchy, what you could try is to have your Mac between the device and your phone while it is connecting. The details will vary depending on a number of factors (including whether it uses an IP or a domain name, whether it uses TLS and actually checks certificates or not\u2026).

\n
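Since the camera already serves a bare directory listing over HTTP, pulling all the download links in one go is straightforward. Here is a sketch using only the standard library; it assumes a plain `<a href>` index page, and the IP address and file names are purely illustrative:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect every href from a simple directory-listing page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the listing's URL
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

# Hypothetical markup of the kind such cameras serve:
page = '<html><body><a href="VID_0001.MP4">VID_0001.MP4</a></body></html>'
links = extract_links(page, "http://192.168.1.1/DCIM/MOVIE/")
# → ['http://192.168.1.1/DCIM/MOVIE/VID_0001.MP4']
```

From there, `urllib.request.urlretrieve` (or any HTTP client) can fetch whichever files you actually want instead of all of them.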

It\u2019s going to be a bit convoluted but you could share the connection to the device with another network (you\u2019ll probably need to share to Ethernet and then have an AP for the iPhone to connect to), and start by observing using Wireshark what the phone sends (note that you\u2019ll have to somehow separate what is actually from the app from the rest).

\n

One other thing you could look into is to activate developer mode on your phone, then use the Console app (not Terminal, Console) to view the logs on your phone. Sometimes apps log lots of useful information.

\n" }, { "Id": "6664", "CreationDate": "2023-07-11T02:38:59.303", "Body": "

I've brainstormed an idea for an IoT hobby project but don't know exactly where to start in terms of the implementation. Here's the idea: 10 of my friends who live within a 0.5 mile radius of my house like to get together often. I want to give each of my friends a small clicker (or the like) to report to me whether they will be able to attend an outing. So for instance, on Sunday morning each friend will 'click' to inform me whether they will come later that afternoon. What is the most appropriate protocol to use? What type of hardware will I need?

\n", "Title": "How to build a roll call IoT app?", "Tags": "|protocols|", "Answer": "

This is a very broad question with lots of possible answers, but here are three possible approaches:

\n\n

There are other alternatives involving a board with a cellular modem instead, and some would probably suggest WiFi HaLow or other radio protocols.

\n

There are of course a lot of details and caveats in each option, but this should get you started until you come up with more specific questions.

\n" }, { "Id": "6670", "CreationDate": "2023-07-14T19:41:47.000", "Body": "

When doing fleet registration of IoT devices on AWS IoT, what is the difference between using\nCreateCertificateFromCsr and CreateKeysAndCertificate? They seem similar enough that both shouldn't be needed, so why does AWS provide both?

\n", "Title": "What is the difference between using CreateCertificateFromCsr and CreateKeysAndCertificate?", "Tags": "|aws-iot|aws|", "Answer": "

In TLS crypto there are a few different objects which are related to each other.

\n

The two basic objects are the private and public keys, which are generated at the same time, and are linked to each other (the private key allows one to decipher data encrypted with the public key, the public key allows one to verify a signature generated with the private key).

\n

The private key should remain secret, and it is best if it actually never moves away from the device where it was generated. The public key is public and can be shared with anyone.

\n

So the best course of operation is to generate the private/public key pair on the device where the private key will be used, and then only communicate the public key and objects derived from it.

\n

The main type of object derived from it is a certificate. A certificate contains the public key, but also identifying information, and it is signed by a certification authority (CA) which vouches for that identifying information.

\n

To get a certificate from the CA, one creates a \u201ccertificate signing request\u201d (CSR). One needs the private/public key pair to generate it, and adds the identifying info into the CSR (but since it has not been signed by the CA, that information is not trusted yet).

\n

So the \u201cright\u201d way to do things is as follows:

\n\n

This is what CreateCertificateFromCsr lets you do. For this you need to generate the private/public key pair, generate the CSR (with the right data), send the CSR using this call, and you get back the certificate.

\n

Now some people are not comfortable doing all this, and they prefer something simpler. And that\u2019s where CreateKeysAndCertificate comes in. You don\u2019t need any input; they will do everything for you: generate the key pair, the CSR, sign it. In the end they will return the private key and certificate, which is all you need.

\n

The difference is that the private key will have been generated on their side and is transmitted over the network, which means someone, either at their end or in between, could try to intercept it, and then act as if they were you. In this scenario this is unlikely, so you can probably save yourself the hassle and use the second one.

\n" }, { "Id": "6700", "CreationDate": "2023-08-27T02:32:17.273", "Body": "

I'm in the process of creating a Raspberry Pi-powered home automation system. I have a complicated node.js script running on the Pi, and I want to be able to send commands from the Pi to my Alexa (e.g. turn on lights, say something, etc). I've looked into AVS (Amazon Voice Service) but I'd have to convert text to an audio command for that, and I was wondering if there's a simpler way. Does anybody have experience in this type of thing?

\n", "Title": "Send Alexa a command from another device?", "Tags": "|raspberry-pi|alexa|nodejs|", "Answer": "

Your answer lies in going the IFTTT route to control your device.\nYou can then control it from Alexa as well as other sources of control such as your javascript file.
\nHere are some Alexa IFTTT integrations to get you started.
\nhttps://ifttt.com/amazon_alexa

\n" }, { "Id": "6702", "CreationDate": "2023-08-30T16:29:54.840", "Body": "

I am fairly new to IoT and to all kinds of Arduino chips/sensors, etc. I am trying to understand the mechanism of LoRa device communication. With the help of the official Arduino web page I found the following code:

\n
void setup() {\n  Serial.begin(9600);\n  while(!Serial);\n\n  Serial.println("LoRa Sender");\n\n  if (!LoRa.begin(868E6)) {\n    Serial.println("Starting LoRa failed!");\n    while (1);\n  }\n}\n\n
\n

As far as I understand, LoRa.begin(868E6) sets the transmission frequency of the device. The code continues with the loop function:

\n
void loop() {\n  Serial.print("Sending packet: ");\n  Serial.println(counter);\n\n  // send the data to all \n  // possible consumers ? \n  LoRa.beginPacket();\n  LoRa.print("hello ");\n  LoRa.print(counter);\n  LoRa.endPacket();\n\n  counter++;\n\n  delay(5000);\n}\n
\n

Now, if I am not mistaken, every other LoRa device within range of the signal that is also set to the same frequency (868E6) can receive the data produced by the transmitting device. Is my understanding correct? And if yes, then how can we prevent unwanted devices from interfering with our system/setup? Should we use some kind of encryption? Maybe certificates?

\n

Thanks in advance to anyone that can help!

\n", "Title": "Arduino LoRa communication protocol", "Tags": "|arduino|lora|lorawan|", "Answer": "

LoRa is the "raw" radio modulation. You create a packet, it sends it as is, with only the LoRa modulation, no additional data (addresses, encryption, etc.).

\n

Most people will use LoRaWAN rather than just raw LoRa. LoRaWAN uses the LoRa modulation, but adds quite a few things on top of that, including addressing, encryption, optional acknowledgments and confirmed packets, join requests, handling on multiple channels and data rates (including changes to those), etc.

\n

LoRaWAN however relies on a "hub and spoke" or "star" topology: there's a gateway (or there could be several), and devices talk to the gateway, but not to each other. Gateways are connected to an LNS (LoRa Network Server), which in turn talks to servers on the Internet (usually the goal of LoRa devices is to report data to a server somewhere on the Internet, and sometimes receive a little bit of information like configuration from the same).

\n

LoRaWAN gateways are more complex than end-devices because they have to listen on multiple channels and multiple data rates at the same time, and then need to communicate with an LNS.

\n

One option, if you have coverage, is to use an existing LoRaWAN network (the most ubiquitous is The Things Network, but there are others). Then you register your LoRaWAN end device and your application server on that network, and you can set up your device to join the network, after which whatever it sends should end up on your server. Note that you will use different APIs, as you need a LoRaWAN implementation, not just raw LoRa.

\n

Alternatively, you can have your own gateway and register it on an existing network. Or have your own gateway and your own LNS.

\n

If you don't want LoRaWAN for whatever reason but want to stick to pure device-to-device communication and need more than raw packets, then you will have to implement those additional features (addressing, encryption, acknowledgments, etc.) yourself.

\n
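To give a flavour of what implementing those features yourself entails, here is a minimal sketch (plain Python for clarity, not device code) of wrapping a raw payload with an address, a replay counter and an authentication tag. The key, addresses and frame layout are all invented for illustration, and this only authenticates, it does not encrypt:

```python
import hmac
import hashlib
import struct

KEY = b"shared-secret-16"  # hypothetical pre-shared key provisioned on both devices

def frame(dest_addr: int, counter: int, payload: bytes) -> bytes:
    """Prefix payload with a 2-byte address and 4-byte counter; append an 8-byte HMAC tag."""
    header = struct.pack(">HI", dest_addr, counter)
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
    return header + payload + tag

def parse(packet: bytes, my_addr: int, last_counter: int):
    """Return the payload if the packet is addressed to us, authentic, and not a replay."""
    header, payload, tag = packet[:6], packet[6:-8], packet[-8:]
    dest_addr, counter = struct.unpack(">HI", header)
    if dest_addr != my_addr or counter <= last_counter:
        return None                      # wrong destination or replayed frame
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return None                      # forged or corrupted frame
    return payload

pkt = frame(0x0001, 42, b"hello")
assert parse(pkt, 0x0001, 41) == b"hello"   # accepted
assert parse(pkt, 0x0001, 42) is None       # same counter again: rejected as a replay
```

LoRaWAN does all of this (and encryption, channel management, etc.) for you, which is the main reason to prefer it over rolling your own scheme on raw LoRa.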

There have also been implementations of mesh protocols over LoRa, though I'm not familiar with the details.

\n" }, { "Id": "6708", "CreationDate": "2023-09-16T18:24:04.743", "Body": "

I'm developing an IoT product's software, where it runs HTTPS client.

\n

Given that server certificates are renewed every year, that's not an issue as long as we're authenticating against the intermediate certificate (as with Let's Encrypt).
\nGiven that those intermediate certificates also have a lifetime, a solution might be to select a CA whose certificates have a long lifetime [longer than the product lifetime (perhaps ~10 years)].

\n

Given this suggested solution, what are recommended CAs for meeting this criteria?

\n

Alternatively, is it a workable approach to update the stored CA certificate, and then change the server's certificate, about 3 months before the expiration of both the CA and the server certificates?

\n

About limitations: could the server hold two different certificates within this update period?

\n", "Title": "TLS Certificate life for IoT System", "Tags": "|https|tls|", "Answer": "

When a TLS connection is established, the server will present a certificate, and possibly intermediate certificates.

\n

Your device should then decide if it trusts the presented certificate. It does so by having a trust store, where it holds certificates it\u2026 trusts.

\n

A certificate presented by the server may be trusted if:

\n\n

In the latter case, the device will check who signed the certificate (the certificate identifies the signing certificate). It will try to find that certificate, either in the list of certificates presented by the server, or in the trust store.

\n

If it doesn\u2019t find the signing certificate, then the original certificate cannot be trusted.

\n

Then it will check that the signature on the original certificate matches the signing certificate. If it doesn\u2019t, again, the certificate won\u2019t be trusted.

\n

[skipping a number of other checks such as validity dates, hash types and sizes, etc.]

\n

Now we know that the original certificate (A) was signed by another certificate (B). If B was in the trust store (more specifically, trusted to sign other certificates), then that\u2019s it, since A is signed by B and we trust B, then we trust A.

\n

If B was not in the trust store but in the list of certificates presented by the server (an intermediate cert), then we start the process again, trying to find who signed this cert, whether we can trust it, etc.

\n
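The chain walk described above can be sketched as a toy resolver. This is purely illustrative: real validation (e.g. OpenSSL's) also checks signatures, validity dates, key usage and more, and the certificate names here are invented:

```python
def chain_is_trusted(cert_name, presented, trust_store):
    """Walk 'signed-by' links through the presented certs until we hit the trust store.

    presented:   {cert_name: signer_name} for certs sent by the server.
    trust_store: set of cert names the device trusts to sign others.
    """
    seen = set()
    while cert_name not in trust_store:
        if cert_name in seen:       # loop guard: a cycle can never reach a root
            return False
        seen.add(cert_name)
        signer = presented.get(cert_name)
        if signer is None:          # signer neither presented nor trusted
            return False
        cert_name = signer
    return True

presented = {"server-cert": "intermediate-CA", "intermediate-CA": "root-CA"}
assert chain_is_trusted("server-cert", presented, {"root-CA"})
assert not chain_is_trusted("server-cert", presented, set())  # empty trust store
```

This makes the failure modes concrete: remove the root from the trust store (an outdated device) or drop the intermediate from the presented list (a misconfigured server) and the chain can no longer be resolved.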

There are different ways of establishing what certificates your device will trust.

\n

The normal way in a public PKI, like that used to check domain names in browsers, is to have a (looooong) list of \u201croot certificates\u201d in the trust store. Each vendor, independently and/or in cooperation with the CA/Browser Forum, will decide which Certification Authorities (CAs) they trust, and thus which root certificates to include.

\n

That list is frequently updated, with new root certs added, old ones replaced, and sometimes certs removed altogether (when the device vendor and/or industry consider that the CA is not trustworthy and has issued certificates incorrectly, or is at risk of doing so).

\n

In some cases the private key of a cert may be leaked. In such cases the associated certificate can no longer be trusted.

\n

Also remember that when you renew your certificate, there is absolutely no guarantee that it will still be signed by the same chain of certificates, or even the same root.

\n

So the public PKI is a living thing. You just can\u2019t rely on a small and fixed list of root certs if you use it.

\n

If your device has the ability to hold a full certificate store, you should have that, and include a way to keep it updated. You can for instance find curl\u2019s CA list, extracted from Mozilla, here.

\n

The alternative is not to use the public PKI, but to have your own CA and/or certs.

\n

Some people will just store the server cert directly in the device. Others will set up their own CA, store the root cert of the CA in the device, and issue certificates signed by that CA for their server.

\n

In any case, it\u2019s still a very good idea to be able to:

\n\n

The mechanism will be exactly identical to the public PKI case. The differences will be that:

\n\n

The fact that your server is also accessible by browsers is not an issue: just point two different names to your server, and use different certificates, one from the public PKI and another from your own. Use the domain with the public PKI cert for generic users, the other for your devices.

\n

However this would fail if your devices end up behind a proxy or firewall (explicit or transparent).

\n" }, { "Id": "6731", "CreationDate": "2023-10-14T11:15:27.163", "Body": "

Trying to design some secure firmware here and running into a brick wall regarding the use of crypto-chips. We are considering using this one here: https://www.microchip.com/en-us/product/ATECC608B. All data on our RTL8720DN 2.4/5GHz chip is encrypted/decrypted using AES-256, which is a big deal because we encrypt and save the customer's home WiFi password in the device EEPROM. The issue that we are concerned about is that the IV and CBC keys are exposed in our program itself. The crypto-chip fixes that by coming pre-configured with the keys and, obviously, they are securely locked so that no one can get them out. Whenever we need to encrypt/decrypt something we just make a quick call to the crypto-chip to get the secure keys and we are good to go.

\n

We are already using SSL/TLS communications through the customer's home WiFi to our cloud servers, etc., so we don't need the crypto-chip to help us with certificates.

\n

Billion dollar question here: if a hacker breaks into a customer's home and steals our device, can't they just decompile our code, or even just flash their own code onto the device, and "programmatically" extract the AES keys from the crypto-chip? That's basically what we do when we update our firmware via OTA. If that's the case, what's the difference between just leaving the keys in plain text in our program?

\n", "Title": "If hackers get ahold of your physical IoT device, what's the purpose of using secure/crypto-chips?", "Tags": "|arduino|cryptography|", "Answer": "

If a hacker gets hold of your hardware, it is just a question of the time and resources the hacker has before they can get your device key or certificate. And then they can get all data stored on the device, unless it is one-way encrypted with no keys on the device to decrypt it.
\nSo, you will not be able to do much about that WiFi password, except cost the hacker a bit of time and resources. In fact, such vectors are a threat to home security, and hence many suggest that homeowners take the trouble to put all IoT devices on the guest network. It is painful, but provides a degree of security.
\nBut the device also uploads data to you after it gets on the WiFi.\nThe important thing is ensuring that, if they do get that one device, they cannot then get access to or spoof your other devices using that key/certificate.
\nSo, you will have to create device specific keys. They can be derived from some master key or be random. In case of certificates, they can (and should) have the same chain of authority but be individual device specific certificates.
\nIn the case of AWS IoT, there are provisions in the certificate to put in a device serial number and then set policies at AWS so that one certificate cannot spoof another device's serial number.
\nThe hacker will probably be able to spoof that same device and upload data to your cloud, to try to cause a DoS (or DDoS, depending on your security policies) attack, or to upload unreasonable amounts of valid-looking data and try to poison aggregate reports, etc.
\nYou will have to consider those threats as well.
\nBut, device specific keys are a start.
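As a concrete illustration of deriving per-device keys from a master key, here is a sketch using an HMAC-based derivation. The master key and serial format are placeholders; in production the master key would live in an HSM or secure factory system, never in shipped code:

```python
import hmac
import hashlib

# Placeholder master key for illustration only; never embed the real one in firmware.
MASTER_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def device_key(serial: str) -> bytes:
    """Derive a 256-bit device-specific key from the master key and the device serial."""
    return hmac.new(MASTER_KEY, serial.encode(), hashlib.sha256).digest()

k1 = device_key("SN-000123")
k2 = device_key("SN-000124")
assert len(k1) == 32
assert k1 != k2  # extracting one device's key reveals nothing about another's
```

The factory provisions `device_key(serial)` into each unit's secure element, and the cloud side can recompute the same key from the serial, so no key database is needed and a single compromised device only ever exposes its own key.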

\n" }, { "Id": "6733", "CreationDate": "2023-10-15T12:23:40.420", "Body": "

I have a simple iot app:

\n
raspberry pi 4\naeotec Z-stick7\naeotec a\u00ebrQ temp/humidity sensor\nmosquitto\nzwave-js-ui\n
\n

The setup works fine, yet state changes are reported only infrequently.

\n

Yesterday evening, circa 23:00, the sensor reported 22.7 degrees to zwave-js-ui. This morning, circa 08:00, 19.2 degrees. No reports in between.

\n

These measurements are accurate; however, the temperature will have fallen gradually during the night (the sensor is placed near a window that was partially opened at around 23:00, and the outdoor temperature fell to around 8 degrees last night).

\n

The log in the zwave-js-ui webapp registers no events between 23:00 yesterday and 08:00 today.

\n

I'm guessing I need to tweak the parameter [3-112-0-1] Temperature Change Report Threshold?, which appears under Control Panel -> Node -> Configuration v4

\n

\"screenshot

\n

But to what?

\n

I'd like reports at least every degree of change.

\n

It's worth pointing out that I do get reports if I quickly push the button on the sensor.

\n

I found this post https://github.com/zwave-js/node-zwave-js/issues/2595, suggesting that this was an issue in older versions (then called zwavejs2mqtt). The issue was unresolved and closed as stale.

\n", "Title": "aeotec a\u00ebrQ temp/rh sensor or zwave-js not reporting state change", "Tags": "|raspberry-pi|zwave|", "Answer": "

Perhaps there is an error in the software, and the resolution is not 0.1 but 1.0.

\n

Setting the threshold to 1 may reveal such an error.

\n" }, { "Id": "6744", "CreationDate": "2023-10-29T14:18:17.437", "Body": "

I have a setup where I am running a simple Mosquitto server for MQTT. I then run the Z-Wave JS UI project to interact with my Z-Wave devices. This works great and I see the messages showing up in MQTT; for example, here is me turning the light on and then back off...

\n
Topic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876065,"value":1698588451955}\n2023-10-29 10:14:36:043\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876065,"value":1698588451955}\n2023-10-29 10:14:36:043\n\nTopic: zwave/Bedroom_Light/37/0/targetValue (QoS: 0)\n{"time":1698588876077,"value":true}\n2023-10-29 10:14:36:053\n\nTopic: zwave/Bedroom_Light/37/0/targetValue (QoS: 0)\n{"time":1698588876077,"value":true}\n2023-10-29 10:14:36:054\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876079,"value":true}\n2023-10-29 10:14:36:054\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876079,"value":true}\n2023-10-29 10:14:36:055\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876094,"value":true}\n2023-10-29 10:14:36:068\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876094,"value":true}\n2023-10-29 10:14:36:068\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876316,"value":1698588876093}\n2023-10-29 10:14:36:292\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876316,"value":1698588876093}\n2023-10-29 10:14:36:293\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876494,"value":1698588876093}\n2023-10-29 10:14:36:467\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876494,"value":1698588876093}\n2023-10-29 10:14:36:468\n\nTopic: zwave/Bedroom_Light/37/0/targetValue (QoS: 0)\n{"time":1698588876510,"value":false}\n2023-10-29 10:14:36:485\n\nTopic: zwave/Bedroom_Light/37/0/targetValue (QoS: 0)\n{"time":1698588876510,"value":false}\n2023-10-29 10:14:36:485\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876511,"value":false}\n2023-10-29 10:14:36:485\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876511,"value":false}\n2023-10-29 10:14:36:486\n\nTopic: zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876526,"value":false}\n2023-10-29 10:14:36:502\n\nTopic: 
zwave/Bedroom_Light/37/0/currentValue (QoS: 0)\n{"time":1698588876526,"value":false}\n2023-10-29 10:14:36:503\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876743,"value":1698588876526}\n2023-10-29 10:14:36:717\n\nTopic: zwave/Bedroom_Light/lastActive (QoS: 0)\n{"time":1698588876743,"value":1698588876526}\n
\n

Now with this working, I would like to publish a message to turn on the light instead of using the web UI. To do this I try to publish the message I saw...

\n
zwave/Bedroom_Light/37/0/targetValue\n{"time":1698588451956,"value":true}\n
\n

But nothing happens, and I don't see the current value message to confirm it was changing. What am I missing? How do I get the Z-Wave device to toggle via MQTT?

\n

After playing a bit, it seems to me like Z-Wave JS UI isn't properly handling the requests from MQTT, even though publishing to it works just fine. Still not sure what I am missing.

\n", "Title": "How do I control a device that works with Z-Wave JS UI via a published MQTT message?", "Tags": "|mqtt|zwave|", "Answer": "

So I had to read back through the documentation; it appears there is a /set suffix that needs to be added to the topic path, so instead of zwave/Bedroom/Bedroom_Light/37/0/targetValue it should be zwave/Bedroom/Bedroom_Light/37/0/targetValue/set. Also, I could leave out the timestamp and just send this...

\n
{"value":true}\n
\n

Now it works fine.
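In code, the rule boils down to appending /set to the value topic and sending a bare JSON value. A small helper sketch (the topic is the one from above; the helper itself is illustrative, and any MQTT client such as paho-mqtt would publish the resulting pair):

```python
import json

def set_command(value_topic: str, value) -> tuple:
    """Build the (topic, payload) pair Z-Wave JS UI expects for writes."""
    return value_topic + "/set", json.dumps({"value": value})

topic, payload = set_command("zwave/Bedroom/Bedroom_Light/37/0/targetValue", True)
# topic   == "zwave/Bedroom/Bedroom_Light/37/0/targetValue/set"
# payload == '{"value": true}'
```

Without the /set suffix you are merely writing into the topic Z-Wave JS UI publishes state *to*, which it ignores, which explains the original symptom.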

\n" }, { "Id": "6748", "CreationDate": "2023-11-02T14:53:31.810", "Body": "

EDIT: when mentioning "cards" in the question below, I mean the kind of badges/cards you can buy on AliExpress that claim to be "magic" cards (or that do not claim anything and are then "read-only" ones)

\n

I am planning to use 13.56 MHz NFC cards to protect a lock. I will write the software part myself (this is not a problem, including the security aspects) but I have a hard time understanding NFC cards (in the context of my project). I will build the reader myself, based on a NodeMCU and an RC522 reader.

\n

I have done some reading, and from the chaotic information I found (I must not have looked in the right places), my understanding of the 13.56 MHz cards I have is the following:

\n\n

And finally Question 4: is it possible to programmatically query/guess the type of card? (with a NodeMCU + RC522 but I am open to other combinations)

\n

That's a lot of questions, let me know if you want me to split them into 4 different posts.

\n

I have gathered many NFC tags over the years (some rewritable, some not) and would like to put them to use, but I first need to understand their nature.

\n", "Title": "What kind of 13.56 MHz NFC cards are there?", "Tags": "|arduino|nfc|", "Answer": "

I'm not sure where you got your classification, it seems a bit weird to me.

\n

There are dozens of 13.56 MHz RFID card types and sub-types, but some of the most common are:

\n\n

There are quite a few other RFID cards and devices, but the above are probably the most common, at least in Europe and North America (in Japan and a few other countries you'll also find Felica, Icode, and others). Not all are actually NFC (NFC is only a subset of all 13.56 MHz RFID cards).

\n

The UID is present on all those cards; it is used in the anti-collision protocol. The UID is supposed to be static, unique and non-rewritable, but of course you now have random UIDs, re-used UIDs, and cards with a modifiable UID (used to clone existing cards).

\n

Using the UID to identify a card is very simple and used a lot for non-security-sensitive applications. As soon as you need security, you can just forget about that; you'll need something more secure.

\n
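For a non-security-sensitive lock, UID matching is just an allow-list lookup. A sketch of the host-side logic (the UIDs here are invented; on a NodeMCU you would get the UID bytes from the RC522 driver rather than hard-coding them):

```python
import hmac

# Hypothetical 4-byte UIDs of enrolled cards
ALLOWED_UIDS = {bytes.fromhex("04a1b2c3"), bytes.fromhex("04d4e5f6")}

def uid_allowed(uid: bytes) -> bool:
    """Constant-time comparison against each enrolled UID."""
    return any(hmac.compare_digest(uid, known) for known in ALLOWED_UIDS)

assert uid_allowed(bytes.fromhex("04a1b2c3"))
assert not uid_allowed(bytes.fromhex("00000000"))
```

Keep in mind this is exactly the scheme that UID-changeable "magic" cards defeat, which is why it should only gate things you don't mind being opened by a cloned card.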

Most cards can store a lot more information; it can range from a few bytes to tens of thousands. There are often ways to protect the data (e.g. on most Mifare cards you can save data which is protected and can only be read if you have the relevant keys). Some cards, like payment cards and the like, can use asymmetric crypto and store a private key which you cannot read (but you can check that the card knows the correct key by sending a challenge and verifying the signature it returns with the matching public key/certificate).

\n

To answer your questions specifically:

\n
\n

Is it possible to read more data?

\n
\n

Yes, there are commands to do so.

\n
\n

if so - what is the nature of this data? (and notably - is it random data?)

\n
\n

It would be the data you stored there. Before you write any data, depending on the card, it could be just 0s or some other fixed pattern, or just random data. Just think of it as a (very small) hard drive or USB stick. It holds blocks, you write data to it, and you can later read it back (taking into account various levels of protections if there are any).

\n

Some cards may also have other read-only data, though the details vary from card to card, you would have to check the data sheet for the specific model. Other cards may have dynamic data as well (e.g. number of activations).

\n

Some cards are able to generate dynamic data with a counter and a signature involving that counter, other data, and a private key (so one can even on some of them generate a dynamic URL which changes at each read). Again, specific to each type of card.

\n
\n

Are there different cards with the possibility to write "only UID" or "UID and more data"?

\n
\n

I'm not aware of any card on which you could only write the UID. Most cards do not allow changing the UID. And there is nearly always more data you can write to. However, many cards can be write-protected, so you can write data, then set the card to read-only (either permanently or until you provide a password you set at the same time, depending on the card). Read-only mode can be set either card-wide or per block, depending on the device.

\n
\n

Is it possible to programmatically query/guess the type of card?

\n
\n

Very generally it can be quite difficult given the very large number of card types, but there are ways to do it for specific cards. See for instance the MIFARE type identification procedure.

\n

For some cards/tags you can use the NFC TagInfo by NXP app on Android which will try to determine the tag type and provide more information, though results vary quite a bit depending on the type of card/tag. There is also an iOS version but it provides less information, and has probably more limited compatibility.

\n" }, { "Id": "6752", "CreationDate": "2023-11-15T04:09:02.943", "Body": "

I am using a Heos / Denon system. I have speakers in each room with home theatre support in two rooms (Living and Lounge). For some reason these two rooms are permanently grouped together.

\n

I can group others and ungroup them. I can even drag another room into this fixed group and remove that room. I just can't separate the Lounge and Living. This causes confusion sometimes when the wrong speaker starts emitting sound after its companion is switched off.

\n

Normal drag out or pinch out has no effect on this set. How can I ungroup them?

\n

\"image\"

\n", "Title": "How can I ungroup these permanently grouped speakers on my Heos System", "Tags": "|sound|", "Answer": "

I finally had an answer from the installer. It may help others.

\n
\n

Lounge/Living HEOS: These 2x zones are on the Denon AVR. There is only one HEOS streamer input, so you can only switch zones on/off and not ungroup. All other HEOS zones are fully matrixed with a HEOS streamer per zone.

\n
\n" }, { "Id": "6763", "CreationDate": "2023-11-26T03:51:59.683", "Body": "

I live in Australia. I have a garden bore with an electric bore pump for garden reticulation (irrigation). The bore supplies water to taps (faucets) in the garden, as well as sprinklers that are on separate timers. Consequently, power is continually supplied to the pump. On rare occasions one of the fittings snaps, as happened this morning, and water just continuously pumps out until we become aware of the breakage. This is bad enough if we are at home, but should it happen when no one is at home the pump might run for hours or even days. I could just put the bore pump on a smart plug and restrict the time that the power is supplied to the pump to the times required by the sprinklers, but that would mean manually turning on the power if we wanted to use a tap. What I would like is an outdoor smart plug that could not only remotely control power to the pump, but could also notify me when power is being consumed outside specified times. That way I could leave the power on all the time, as I currently do, but if I get a notification that power is being drawn at an unexpected time I could remotely turn the power off.

\n

Has anyone come across a smart plug with this capability?

\n", "Title": "Outdoor smart plug with real-time energy monitoring and notification", "Tags": "|smart-plugs|energy-monitoring-systems|", "Answer": "

My assumptions:

\n

I think the pump is currently supplied with power at all times, and its job is to pressurize the supply at the output so that the faucets and the timed sprinklers can do their work whenever they need to.

\n

If that is true, you are looking for a solution that detects when there is a break in the fittings which would result in a loss of that pressure.

\n

If that chain of assumptions holds, then maybe something like this would be a solution for you. (I am not affiliated in any way with this product; I am just using it as an example. There are many others in the same solution space.)

\n

https://www.amazon.com/Moen-900-006-1-Inch-Smart-Shutoff/dp/B081HT5LD6/

\n" }, { "Id": "6779", "CreationDate": "2023-12-10T20:20:19.740", "Body": "
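Alternatively, if you do put the pump on a smart plug that reports power draw and you run a hub such as Home Assistant, the "notify me when power is drawn at an unexpected time" part can be automated directly. A hedged sketch, where the entity names, the notifier, the wattage threshold, and the time window are all assumptions to adjust for your setup:

```yaml
# Hypothetical Home Assistant automation: alert if the bore pump draws
# power outside the expected sprinkler window. Entity/notifier names,
# threshold, and times are placeholders.
alias: Bore pump running out of hours
trigger:
  - platform: numeric_state
    entity_id: sensor.bore_pump_power
    above: 50          # watts; set between idle and running draw
    for: "00:05:00"    # ignore brief blips
condition:
  - condition: time
    after: "08:00:00"  # sprinklers finished by now
    before: "22:00:00"
action:
  - service: notify.mobile_app_my_phone
    data:
      message: "Bore pump is drawing power outside sprinkler hours."
mode: single
```

With this in place you can leave the power on all the time, as you do now, and remotely switch the plug off from the app when the alert fires.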

I have a detached garage that has its own electrical panel. The previous owner of the house was an electrician and it's done really well, however, there is not a very strong Wi-Fi signal in the garage.

\n

I would like to have some smart outlets out there and such. There are two lines running through the conduit that I can use (one was a 4-wire line for a house switch to the eave lights on the garage, which also had a switch, and the other is a 3-wire spare). Currently, all but one of the seven lines are being used by the Sonoff smart switch relay.

\n

I was in the process today of swapping out the Sonoff Mini for a UL-certified Shelly (because the Sonoff failed electrical inspection, since it is not UL certified), and I thought, "boy, I wish there was an ethernet line already running through this electrical conduit out to the garage, then I could have a wired backhaul out there".

\n

I started thinking about it, and I thought, "huh, what if I actually just spliced in an ethernet connector and hooked up ethernet through these 12 gauge wires?"

\n

So I started googling and I learned about powerline ethernet adapters. I'm not sure if that would be the same thing as me just splicing in ethernet connectors or not. There's no outlet involved.

\n

I have read that you don't get very good speeds with powerline adapters, which is not an issue; I just want a couple of smart outlets and switches, etc. These two lines are just two of many that run out to the garage. Other than those, there are 200 W of other lines going out there that connect to the panel in the garage.

\n

The 12 gauge wires I have are threaded.

\n

So could this work? Can I just splice in a couple of ethernet connectors?

\n

What are the upsides and downsides of doing this? I understand that there should be issues with regard to shielding and data loss. I don't need it to be very fast, I just need it to be reliable.

\n", "Title": "Can I nut-in an Ethernet connector (from 28 gauge) to 12 gauge so that I can run Ethernet out to my detached garage?", "Tags": "|ethernet|wired|", "Answer": "

As mentioned, you cannot use power cables for Ethernet; otherwise, why would there be special cables for it?

\n

You can use powerline adapters. The quality and speed of signal you get will depend on the manufacturer and the price you pay. There are caveats in some ads for them, so read them carefully.

\n

You need an adapter at either end of the same cable run (i.e. the same phase, without too many joins). If you are only going to use the Ethernet for smart switches, then it will work just fine. For anything else, it is trial and error.

\n

You can also use better quality WAPs or even long-distance ones. I am using an outdoor UniFi unit that easily reaches 100 m.

\n" }, { "Id": "7819", "CreationDate": "2024-01-21T10:55:18.167", "Body": "

When I tell Alexa to be quiet, it often, but not always, responds with this.

\n

https://youtu.be/DZssCjoTg-0

\n

We have our wake word set to "computer" because Star Trek. As you can see from the video, it does it most of the time, but not consistently every time. I was wondering whether it is just me, but it also does the same for my wife. Other Echo Dot units in different locations also do the same.

\n

Questions

\n
    \n
  1. Why does it give this answer?
  2. \n
  3. How can I report this to Amazon?
  4. \n
\n", "Title": "Why is Alexa telling me to dial emergency services?", "Tags": "|smart-home|alexa|", "Answer": "

I just tried and my 4 devices don't do this. On the app, you can -

\n\n

Here you can see your commands and what Alexa interpreted them as. To tell it that it erred -

\n\n" }, { "Id": "7826", "CreationDate": "2024-02-03T04:38:20.983", "Body": "

I have a Fibaro Home Automation system with multiple Alexa devices. When someone leaves any of the 3 garage doors open, I have Alexa automatically notify us hourly between 8pm and 11pm that the garage door is still open.

\n

This has been working fine for months. All of a sudden these Notifications are whisper quiet on all the devices. They are not automatically adjusted for the background noise level either. If the TV is on, then I almost never hear them.

\n

Is there a way to get Alexa to use the same volume level for everything it does?

\n

This sounds like this question, but it's different in that I don't want to customise it; I just want everything at the same level. And it's seven years later.

\n", "Title": "How can I tell Alexa to use a different sound level for Notifications", "Tags": "|amazon-echo|sound|", "Answer": "

Prompted by the comments, here are two ways of doing it. It is worth noting that Alarms, Timers and Notifications share the same volume level.

\n

Voice

\n\n

On the App in 2024

\n\n" }, { "Id": "7833", "CreationDate": "2024-02-12T16:21:10.057", "Body": "

I am working on an IoT project where I have set up a Raspberry Pi as a guest OS in VirtualBox on my Windows host machine. Both the host and the guest OS have been assigned static IP addresses: 192.168.56.1 for the host and 192.168.56.2 for the guest.

\n

I am facing issues with network connectivity between the host and the guest OS. When I attempt to ping the guest OS (192.168.56.2) from the host, I receive the following message:

\n
\n

Pinging 192.168.56.2 with 32 bytes of data:
\nReply from 192.168.56.1: Destination host unreachable.
\nRequest timed out.
\nRequest timed out.
\nRequest timed out.

\n

Ping statistics for 192.168.56.2:\nPackets: Sent = 4, Received = 1, Lost = 3 (75% loss).

\n
\n

And when I try to ping the host (192.168.56.1) from the guest OS, I get the error message:

\n
\n

ping: connect: Network is unreachable

\n
\n

I have configured the network settings in VirtualBox to use a bridged adapter.

\n

Could someone help me troubleshoot this issue? I'm unsure why the ping requests are failing despite configuring the network settings correctly. Any insights or suggestions would be greatly appreciated.

\n", "Title": "Trouble pinging between host (Windows) and guest OS (Raspberry Pi) in VirtualBox", "Tags": "|raspberry-pi|", "Answer": "

You will have to enable networking between the two.

\n
\n

Try this:

\n

Set up VirtualBox to use two adapters: the first adapter set to NAT (that gives the guest its internet connection), the second set to Host-only. Start the virtual machine and assign a static IP to the second adapter in Ubuntu (for instance 192.168.56.56). The Windows host will have 192.168.56.1 as its IP on the internal network ("VirtualBox Host-Only Network" is its name in Network Connections on Windows). This gives you access to the Apache server on Ubuntu from Windows at 192.168.56.56, and Ubuntu keeps internet access through the first (NAT) adapter. To make the connection available both ways (accessing the Windows host from the Ubuntu guest), there is one more step: Windows automatically adds the VirtualBox host-only network to the list of public networks, and that cannot be changed, which means the firewall prevents proper access. To overcome this without creating security holes in your setup, go to the Windows Firewall section in Control Panel and click Advanced settings. In the page that opens, click Inbound Rules (left column), then New Rule (right column). Choose a custom rule, set it to allow all programs and any protocol. For the scope, add 192.168.56.1 in the first box (local IP addresses) and 192.168.56.56 in the second box (remote IP). Click Next, select Allow the connection, Next, check all profiles, Next, give it a name and save. That's it: you now have two-way communication, with Apache or any other service available, as well as internet access. The final step is to set up a share. Do not use the shared folders feature in VirtualBox; it is quite buggy, especially with Windows 7 (and 64-bit). Instead, use Samba shares.

\n\n

Follow this link for how to set that up:\nhttps://wiki.ubuntu.com/MountWindowsSharesPermanently

\n
\n

Refer to virtualbox networking

\n
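The same two-adapter setup can also be scripted with VBoxManage instead of clicking through the GUI. A sketch, where the VM name "rpi-guest" and the host-only adapter name are assumptions to replace with your own:

```shell
# Sketch: give the VM a NAT adapter (internet) plus a host-only adapter
# (host<->guest traffic). VM and adapter names are placeholders.
# List your host-only interfaces first:
VBoxManage list hostonlyifs

VBoxManage modifyvm "rpi-guest" --nic1 nat
VBoxManage modifyvm "rpi-guest" --nic2 hostonly \
    --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
```

Run these while the VM is powered off, then boot it and assign the static IP to the second interface inside the guest as described above.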

PS: I recently joined in and do not have enough reputation to put this in comment. Hence, putting this in an answer. Hope this solves your problem.

\n" }, { "Id": "7837", "CreationDate": "2024-02-19T23:42:31.493", "Body": "

Car fuel level sensor

\n

Car fuel level monitoring is implemented and displayed on the dashboard with the help of a fuel level sensor inside the automobile.

\n

Is it possible to remotely monitor a running automobile's fuel level using smartphone apps, Wi-Fi, or the Internet of Things (IoT)?

\n

If yes, how?

\n

If no, why?

\n", "Title": "Car fuel level sensor remotely monitoring", "Tags": "|sensors|wifi|mobile-applications|", "Answer": "

Yes, it's possible within the limits that exist in the vehicle network ecosystem. As one of the other answers suggested, you can plug in an OBD-II reader and connect to it to get the readings.

\n

This has its limitations though. Vehicles come with fuel level sensors, but the values are not always broadcast or made available to the BCM (body control module) with which these OBD readers communicate. Being able to read such data can get a little complicated: you will ideally have to tap into the CAN network directly from the OBD-II port (pins 6 &amp; 14) and reverse engineer the function address (jargon for vehicle CAN communication) that the fuel level is mapped to inside this particular vehicle network. And to your surprise, these IDs are not standard across different manufacturers or models, so it becomes a long journey of reverse engineering. But assuming you are able to do that, then it is just a matter of writing a CAN payload targeting the function address to get the value in response.

\n
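Where the car does expose fuel level through standard OBD-II (mode 01, PID 0x2F, "Fuel Tank Level Input"), the decoding is simple: the single data byte A maps to 100 * A / 255 percent. A minimal sketch of just the decoding step (the frame layout assumes an ISO-TP single frame; actually talking to the reader hardware is out of scope here):

```python
def parse_fuel_level(frame: bytes) -> float:
    """Decode an OBD-II mode 01 PID 0x2F (Fuel Tank Level Input) response.

    A positive response echoes the request mode plus 0x40 (so 0x41) and
    the PID, followed by one data byte A; fuel level in percent is
    100 * A / 255.
    """
    # ISO-TP single frame: byte 0 is the payload length.
    length, mode, pid = frame[0], frame[1], frame[2]
    if length < 3 or mode != 0x41 or pid != 0x2F:
        raise ValueError("not a PID 0x2F response")
    return 100.0 * frame[3] / 255.0

# Example: data byte 0x80 (128) corresponds to roughly half a tank.
half_tank = parse_fuel_level(bytes([0x03, 0x41, 0x2F, 0x80]))
```

An IoT gateway on the OBD-II port would poll this PID periodically and push the decoded percentage to a phone app over Wi-Fi or cellular.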

\"enter

\n" }, { "Id": "7864", "CreationDate": "2024-03-09T13:27:34.033", "Body": "

I use the latest version of app (2024.1.5-full) and HA (2024.3.0) and want to use the next alarm sensor as a trigger for an automation in HA. So I

\n\n

Now, I have created an automation triggered on "time", as in "When the time is equal to the entity my_phone Next alarm", and added my automation actions:

\n
alias: Morning light\ndescription: ""\ntrigger:\n  - platform: time\n    at: sensor.my_phone_next_alarm\ncondition: []\naction:\n  - service: light.turn_on\n    metadata: {}\n    data:\n      brightness: 120\n      transition: 15\n    target:\n      device_id: 019abc309c83cf3152e60b37ab1ea554\nmode: single\n
\n

When I select "run" to manually test this automation, everything works as intended. Only problem is: the automation is not triggered when the alarm time is reached.

\n

Moreover: When I add the condition that the automation should only be executed when my phone is connected to my local WiFi (sensor "WiFi Connection"), it ignores the condition apparently:

\n
alias: Morning light\ndescription: ""\ntrigger:\n  - platform: time\n    at: sensor.my_phone_next_alarm\ncondition:\n  - condition: state\n    entity_id: sensor.my_phone_wifi_connection\n    state: MyAwesomeWiFiSSID\naction:\n  - service: light.turn_on\n    metadata: {}\n    data:\n      brightness: 120\n      transition: 15\n    target:\n      device_id: 019abc309c83cf3152e60b37ab1ea554\nmode: single\n
\n

With this second example, the light turns on when I run the automation manually - EVEN IF I previously disconnect my phone from my WiFi network (i.e. they shouldn't turn on because of said condition). Without running all that manually, the automation is obviously not triggered at all as the first example wasn't, too.

\n

Any advice on this? How do I get this automation to work properly?

\n", "Title": "HomeAssistant: Companion App sensors won't trigger automation", "Tags": "|sensors|home-assistant|", "Answer": "

Well, boy was I surprised when this morning, the light turned on as planned even though the automation didn't work when testing it.

\n

My assumption would be that the sensor's data values are not propagated in time within HA. All in all, the "Next Alarm" sensor had the updated value in time when testing, and the sensor update schedule says "immediate" in the app. That leaves as the only explanation that the sensor's data is not immediately available to automations.

\n

Can anyone confirm/correct this hypothesis?

\n" }, { "Id": "7872", "CreationDate": "2024-03-18T14:58:55.210", "Body": "

I have an old MacBook that I don't use for anything; it sits on a shelf. It occurred to me that I could maybe run Home Assistant on it. But when I look at the install guide, they point to installing it in a VM, and when I google to see if there are native installs, the results all say "run it in a VM".

\n

I don't need a VM, and I'm not sure the old MacBook could really handle virtualization, as it is kind of slow - but this old MacBook is better than a Raspberry Pi, I think.

\n

Shouldn't it be able to run it natively?

\n

EDIT: The MacBook is a MacBook Retina 12-inch, Early 2015; it seems quite slow and doesn't get updates (running Mojave, 10.14).

\n", "Title": "Can I run Home Assistant natively on an old macbook?", "Tags": "|home-assistant|", "Answer": "

Yes, you can. My suggestion is to use Docker. Just install it on your machine\nhttps://docs.docker.com/desktop/install/mac-install/\nand then use Docker to deploy Home Assistant\nhttps://www.home-assistant.io/installation/

\n" } ]
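The container install then boils down to a single command. A sketch, where the config path and timezone are placeholders to adjust for your machine:

```shell
# Sketch: run Home Assistant Container under Docker Desktop on macOS.
# The -v path and TZ value are placeholders.
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  -e TZ=Etc/UTC \
  -v /Users/me/homeassistant:/config \
  -p 8123:8123 \
  ghcr.io/home-assistant/home-assistant:stable
```

Note that the official docs suggest `--network=host`, which Docker Desktop on macOS does not support; publishing port 8123 instead works for the web UI, though some network auto-discovery features may be limited.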