Columns: CreationDate (string, 23 chars), Answer (string, 60–24k chars), Tags (string, 5–68 chars), Title (string, 15–140 chars), Id (string, 1–4 chars), Body (string, 106–19.3k chars)
2018-09-28T12:46:12.023
<p>This question generated considerable interest (7 as of this writing), so I am posting a follow-up answer with a local-only solution.</p> <p>I have accepted @hardillb's answer, as I have yet to find a method that allows Alexa to control relative volume using a local-only device.</p> <p>However, there is a way to control relative TV sound levels with a local-only device. By using a device name like "TV sound" and phrases like "Alexa, turn up the TV sound", Alexa can be coaxed into thinking it is turning the brightness of a device called "TV sound" up and down. In accepting this approach we are forced to use Alexa's absolute brightness levels to control the TV's relative sound level. The first thing we notice is that we can only turn down the TV sound a few times before we exhaust Alexa's brightness range (Alexa jumps about 25% for each dimming command). But we can also tell Alexa, at the end of each command, what brightness our device is set to. If we always report the brightness as 50%, then Alexa will always respond with more than 50% when we say "turn up TV sound" and less than 50% when we say "turn down TV sound".</p>
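<p>To make the trick concrete, here is a minimal sketch (in Python, purely to illustrate the logic; the real device is an ESP8266) of how the "always report 50%" baseline turns Alexa's absolute brightness requests into relative volume steps. The <code>send_ir_volume_step()</code> helper and the 25% step size are assumptions for illustration, not part of the original project.</p> <pre><code>REPORTED_BRIGHTNESS = 50   # what we always tell Alexa after each command
STEP_PERCENT = 25          # Alexa moves roughly 25% per "turn up/down" request (assumed)

def send_ir_volume_step(direction):
    """Hypothetical helper: transmit one IR volume-up (+1) or volume-down (-1) code."""
    print("IR volume step:", direction)

def handle_set_brightness(requested_percent):
    """Map Alexa's absolute brightness request onto relative volume steps."""
    delta = requested_percent - REPORTED_BRIGHTNESS
    if delta != 0:
        steps = max(1, round(abs(delta) / STEP_PERCENT))
        for _ in range(steps):
            send_ir_volume_step(1 if delta &gt; 0 else -1)
    # Always report 50% back so the next relative command starts from the middle again.
    return REPORTED_BRIGHTNESS

handle_set_brightness(75)   # "Alexa, turn up the TV sound" -&gt; one volume-up step
</code></pre>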
|alexa|esp8266|
How does an ESP8266 "advertise" it can handle Alexa "relative volume" commands
3471
<p>As I understand more I will edit this question. For now, I am guessing at what I need. To make it easier for people to help, I'll tell you the overall purpose:</p> <p>I have programmed an ESP8266 to advertise that it is the TV and that it can turn the TV on / off. The ESP8266 actually transmits the absolute on / off codes to the TV using IR signals. I believe I have added a second &quot;advertisement&quot; for yet another on / off feature to the same ESP8266 device.</p> <p>However, what I really want to add is a &quot;relative volume&quot; device. I believe I need to do this by using XML. That is, I believe I need to modify the XML transmitted to Alexa to not only advertise the on / off device but to also advertise a relative volume device.</p> <p>Where can I find examples where a relative volume device is advertised to Alexa?</p> <p>To clarify my objective, let me add an example:</p> <p>If I say</p> <blockquote> <p>&quot;Alexa, turn on the TV&quot;</p> </blockquote> <p>the TV will turn on. But, if I say</p> <blockquote> <p>&quot;Alexa, turn up the volume on the TV&quot;</p> </blockquote> <p>Alexa will respond</p> <blockquote> <p>&quot;TV does not support that&quot;</p> </blockquote> <p>I started by using <a href="https://github.com/kakopappa/arduino-esp8266-alexa-wemo-switch" rel="nofollow noreferrer">the code in this github.com project</a> and added additional code to handle transmitting the IR signals to the TV. This project appears to transmit this XML in response to Alexa asking what the ESP8266 is capable of doing:</p> <pre><code>HTTP.on(&quot;/eventservice.xml&quot;, HTTP_GET, [](){
  Serial.println(&quot; ########## Responding to eventservice.xml ... ########\n&quot;);
  String eventservice_xml = &quot;&lt;scpd xmlns=\&quot;urn:Belkin:service-1-0\&quot;&gt;&quot;
        &quot;&lt;actionList&gt;&quot;
        &quot;&lt;action&gt;&quot;
        &quot;&lt;name&gt;SetBinaryState&lt;/name&gt;&quot;
        &quot;&lt;argumentList&gt;&quot;
        &quot;&lt;argument&gt;&quot;
        &quot;&lt;retval/&gt;&quot;
        &quot;&lt;name&gt;BinaryState&lt;/name&gt;&quot;
        &quot;&lt;relatedStateVariable&gt;BinaryState&lt;/relatedStateVariable&gt;&quot;
        &quot;&lt;direction&gt;in&lt;/direction&gt;&quot;
        &quot;&lt;/argument&gt;&quot;
        &quot;&lt;/argumentList&gt;&quot;
        &quot;&lt;/action&gt;&quot;
        &quot;&lt;action&gt;&quot;
        &quot;&lt;name&gt;GetBinaryState&lt;/name&gt;&quot;
        &quot;&lt;argumentList&gt;&quot;
        &quot;&lt;argument&gt;&quot;
        &quot;&lt;retval/&gt;&quot;
        &quot;&lt;name&gt;BinaryState&lt;/name&gt;&quot;
        &quot;&lt;relatedStateVariable&gt;BinaryState&lt;/relatedStateVariable&gt;&quot;
        &quot;&lt;direction&gt;out&lt;/direction&gt;&quot;
        &quot;&lt;/argument&gt;&quot;
        &quot;&lt;/argumentList&gt;&quot;
        &quot;&lt;/action&gt;&quot;
        &quot;&lt;/actionList&gt;&quot;
        &quot;&lt;serviceStateTable&gt;&quot;
        &quot;&lt;stateVariable sendEvents=\&quot;yes\&quot;&gt;&quot;
        &quot;&lt;name&gt;BinaryState&lt;/name&gt;&quot;
        &quot;&lt;dataType&gt;Boolean&lt;/dataType&gt;&quot;
        &quot;&lt;defaultValue&gt;0&lt;/defaultValue&gt;&quot;
        &quot;&lt;/stateVariable&gt;&quot;
        &quot;&lt;stateVariable sendEvents=\&quot;yes\&quot;&gt;&quot;
        &quot;&lt;name&gt;level&lt;/name&gt;&quot;
        &quot;&lt;dataType&gt;string&lt;/dataType&gt;&quot;
        &quot;&lt;defaultValue&gt;0&lt;/defaultValue&gt;&quot;
        &quot;&lt;/stateVariable&gt;&quot;
        &quot;&lt;/serviceStateTable&gt;&quot;
        &quot;&lt;/scpd&gt;\r\n&quot;
        &quot;\r\n&quot;;
  HTTP.send(200, &quot;text/plain&quot;, eventservice_xml.c_str());
});
</code></pre> <p>I assume that, in order to support (offer up to Alexa) relative volume control, all that needs to be done is add a description of the volume control feature to the above XML. However, I have not been able to find out how to do that.</p>
2018-09-28T23:09:06.100
<p>In order to use USB gadget mode, you either need two USB controllers native on the device (so one can act as master and one as slave), or all of the other peripherals (Bluetooth, WiFi, Ethernet, keyboard, etc.) will need to be connected by serial, SPI or something similar.</p> <p>Since the 'normal' use case of a single-board Linux computer is to support some extra peripherals over USB, a board with two USB interfaces is probably about as common as the very cut-down approach of the Pi Zero. This port may be described as 'USB on the go' (USB OTG), and you need to check that it will operate at the same time as the other peripherals that you need.</p> <p>You will probably find a wider range of microcontroller parts which support this peripheral combination, but to use these you would need lower-level software (including interfacing with the USB stack at some level - maybe serial over USB).</p>
|raspberry-pi|linux|usb|banana-pi|
SBC similar to Banana/Raspberry Pi with USB comms
3474
<p>I’m looking to develop a handheld device that could communicate with PCs over a WiFi network, Bluetooth and USB. I have looked at the forums about the RPi, and they say it does not support USB communication with a PC since they are both masters. So my question is: what are some boards with specifications equivalent to the RPi 3+ or Banana Pi M2/3 that allow USB/Bluetooth/WiFi communication with a PC? </p> <p>Edited: Limit those devices to ones that are capable of running Linux/Ubuntu/Raspbian OSs</p>
2018-09-29T18:52:41.137
<p>Sending packets at a high rate 'feels' wrong, and does increase the chances of seeing some odd effects when the network latency spikes occasionally. In this instance, the fact that you're in a development cycle means that any period much above 10-30 sec will start to become painful to debug or adjust.</p> <p>Batching up the readings is one option, or you can decimate (sample rate reduce) your readings before uploading them (depending on the application). You can also apply a rate-adaptive approach if there are some events that you want to be able to post-process more precisely.</p> <p>At the highest level of 'not much change', you still probably want to see that the endpoint is alive, so even if you only used daily averaged/processed data, making an upload every hour might make some sense. If you want to measure occupancy, maybe anything over a 10% change would justify an instant update.</p> <p>If you do anything other than a simple periodic update, consider what the worst case fault condition might look like. Even though a broken sensor won't burn through your monthly broadband allowance, it could cause some effects on your LAN which are inconvenient. It's always useful to think about the extra steps which you would take if this project made it to volume deployment, or if it becomes an exploit target.</p>
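<p>As a concrete illustration of the rate-adaptive idea, here is a minimal Python sketch that uploads immediately when a reading changes by more than 10% and otherwise only sends an hourly heartbeat. The <code>upload()</code> function and both thresholds are assumptions for illustration.</p> <pre><code>import time

CHANGE_THRESHOLD = 0.10     # upload straight away on a &gt;10% change (assumed)
HEARTBEAT_SECONDS = 3600    # otherwise upload at least once an hour (assumed)

def upload(value):
    """Hypothetical uploader - replace with your cloud client."""
    print("uploading", value)

last_value, last_time = None, 0.0

def handle_reading(value):
    """Decide whether this reading is worth sending to the cloud."""
    global last_value, last_time
    now = time.time()
    changed = last_value is None or \
        abs(value - last_value) &gt; CHANGE_THRESHOLD * max(abs(last_value), 1.0)
    stale = (now - last_time) &gt;= HEARTBEAT_SECONDS
    if changed or stale:
        upload(value)
        last_value, last_time = value, now
</code></pre>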
|sensors|data-transfer|cloud-computing|
How rapidly should the data from IoT devices be sent to the cloud?
3479
<p>I am configuring a Sonar Distance sensor to send data to a cloud. But the problem for me now is: I do not know when to send the data to this cloud.</p> <p>As far as I know, I could send the data continuously every second. But in my case, the sensor does not always "sense" the signal. So sending data like the above method seems to be wasteful on network bandwidth.</p> <p>I want to ask: Normally, with this kind of sensor; How do people often configure to send the data to a cloud?</p>
2018-10-01T11:08:20.460
<p>You are describing Node-RED, a free input-processing-output app well suited to IoT. It lets you drag and drop many forks and conditionals into your information flow. It supports MQTT, sockets, and HTTP out of the box. If you need more power, you can write complex JS functions with a central state to supplement the GUI-based tools.</p>
|monitoring|
Handle real time rule based events generated by IoT devices
3487
<p>I am working on an IoT project in which I need to send alerts to users based on rules already defined by the user, e.g. if a temperature value matches a certain condition then send alerts to the users, and there are multiple conditions.<br> I managed to send the alerts to users when a condition matches by using the following steps:</p> <ol> <li><p>Store the threshold values and conditions for a device in MySQL.</p></li> <li><p>When the device data reaches the server, check the current value against the given condition and threshold value, and send the alert.</p></li> <li><p>There are also multiple conditions associated with each device, so I need to check each and every condition.</p></li> </ol> <p>Is there any technology that I can use in my project?</p>
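<p>For reference, a minimal Python sketch of the threshold check described in the steps above could look like the following; the rule format and the <code>send_alert()</code> helper are assumptions for illustration, and a rules engine or stream processor would wrap the same idea:</p> <pre><code>import operator

OPS = {"&gt;": operator.gt, "&lt;": operator.lt, "&gt;=": operator.ge, "&lt;=": operator.le, "==": operator.eq}

# Rules as they might be loaded from MySQL: (device_id, field, comparison, threshold)
rules = [
    ("device-1", "temperature", "&gt;", 30.0),
    ("device-1", "humidity", "&lt;", 20.0),
]

def send_alert(device_id, field, value, rule):
    """Hypothetical notifier - email, SMS, push, etc."""
    print(f"ALERT {device_id}: {field}={value} violates {rule}")

def check_reading(device_id, reading):
    """Evaluate every rule for this device against an incoming reading dict."""
    for dev, field, op, threshold in rules:
        if dev == device_id and field in reading and OPS[op](reading[field], threshold):
            send_alert(device_id, field, reading[field], (field, op, threshold))

check_reading("device-1", {"temperature": 32.5, "humidity": 45.0})
</code></pre>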
2018-10-07T18:39:28.797
<p>Really simple. Just hold down the pair button on both remotes at the exact same time, next to each bulb you want to pair with, for 10 seconds. Boom!</p>
|ikea-tradfri|
TRÅDFRI lights and multiple switches
3508
<p>I have 3 TRÅDFRI 30W drivers in my kitchen, and succeeded in pairing all of them to the same remote. However, if I pair a lamp to a 2nd remote, it stops responding to the first one. I was hoping to use 2 remotes, to get 2-way control.</p> <p>From the documentation, one remote can control 10 lamp drivers, but it isn't clear to me if only one remote can drive a lamp at once. If it's possible, is there a reliable way to set it up? </p> <p>I don't have a gateway. If I have a gateway, can I use IFTTT to cross connect two driver/remote pairs or subsets to keep the drivers in sync in the way that I could do with SonOff?</p>
2018-10-09T14:42:46.970
<p><a href="https://www.aimagin.com/en/waijung-2-for-esp32.html" rel="nofollow noreferrer">Waijung 2 for ESP32</a> is what you need exactly.</p> <p>Waijung 2 for ESP32 is an Embedded Coder Target specifically for ESP32 microcontroller family.</p> <p>Not only it can generate C code from your Matlab and Simulink blocks, it also supports advanced features such as Wifi External Mode simulation, allowing you to tune parameters and monitor signals from connected ESP32 hardware in real time, and much more. You can learn and take the full benefits of Model Based Design using affordable, popular, and powerful hardware.</p>
|communication|arduino|esp32|
Can I use MatLab code in esp32?
3518
<p>We have a method to run MATLAB code on the ESP8266 microcontroller: we can manipulate Arduino pins using Simulink in MATLAB. </p> <p>Can we do the same with an ESP32 microcontroller, since its code can also be executed from the Arduino IDE? </p> <p>On an Arduino, we can only execute either Simulink-generated code or its own code at a time. Will I have the same problem here with the NodeMCU ESP32-WROOM-32D?</p> <p><a href="https://drive.google.com/file/d/1GA1Bti1fKYfJrWEaXh_Yn_nN72KD_jbo/view" rel="nofollow noreferrer">Datasheet for the above-mentioned microcontroller</a></p>
2018-10-10T09:39:01.777
<p>Let me answer this in a slightly frivolous way, better answers welcome.</p> <p>After considering all the above, choose:</p> <ul> <li>Something suitable from your existing hardware</li> <li>Something nicely optimised for the application</li> <li>Something you want to learn about</li> <li>Something cheap enough, reliable enough and easy enough</li> </ul> <p>Come next year, you might make a different choice for the same problem.</p>
|sensors|microcontrollers|system-architecture|
How to decide how to select an endpoint device
3520
<p>This question is intentionally rather open ended, and potentially opinion based, but it is intended to act as a catch-all for the questions on how to select a device for a sensor/endpoint. Any question which intends to be more specific would need to start with assumptions about all of these points.</p> <p><strong>Question:</strong> In addition to the points below, how would someone go about selecting a good device for the sensor/endpoint part of an IoT system?</p> <p>There are already good questions <a href="https://iot.stackexchange.com/questions/880/selecting-a-microcontroller-for-a-battery-operated-data-collection-project">1</a>, <a href="https://iot.stackexchange.com/questions/283/what-factors-to-consider-when-selecting-an-integrated-wifi-mcu-for-a-low-powered">2</a>, <a href="https://iot.stackexchange.com/questions/1963/what-is-the-simplest-programmable-iot-device-that-can-connect-to-wi-fi">3</a> on how to select specific devices for a well-defined application, and questions that address some of the points below in detail.</p> <p>There are a number of clear factors which will help to determine which devices are suitable for a particular application. In the end, there are likely to be many good choices, and no obvious 'best'.</p> <ul> <li><p><strong>Communications</strong>. A sensor will usually rely on a wireless interface, sometimes a combined power/signal wired interface might be appropriate. Depending on the application, this might be well defined, or there might be some flexability. Communication can be built-in, or a peripheral depending on how good a fit the other parameters are. Depending on the device deployment, there is a scale of roughly increasing cost/range from <em>wired</em> (Ethernet, serial, USB), <em>short range</em> (BlueTooth, WiFi and mesh variants), <em>developed areas</em> (SMS, LoRaWan, NB-IoT), <em>remote</em> (satellite). Bandwidth and latency also factor into the choice of communication protocol. Note that often the communications parameters are fairly well defined up-front before worrying about specific device selection.</p></li> <li><p><strong>Processing power</strong>. Some sensors are just taking input values and generating packets, others are doing complex signal analysis (face recognition for example). Some communication protocols need a reasonable processing power, so result in devices which have a small (but useful) amount of left-over processing power.</p></li> <li><p><strong>Device or Module</strong> Modules have the advantage of providing an off the shelf, pre-certified solution. They may also be cheap due to economy of scale. If you already need a custom PCB, a device might allow better optimisation.</p></li> <li><p><strong>Power Consumption</strong> If the sensor is battery powered, this will limit the communication choices, and also tends to suggest a device which will spend most of it's time in a deep sleep state (i.e. running a real-time embedded OS rather than Linux).</p></li> <li><p><strong>Power Source</strong> Related to consumption, but acting as a different type of constraint. Mains/battery/solar/harvesting are the obvious choices.</p></li> <li><p><strong>Security</strong> Often security can be <a href="https://iot.stackexchange.com/questions/554/is-there-any-advantage-in-encrypting-sensor-data-that-is-not-private">ignored</a> (for one off or evaluation projects). If security is important, what is the threat model? Are encryption accelerators important? 
Do you need a secure bootloader with the ability to prevent firmware roll-back, whilst at the same time allowing over-the-air firmware updates?</p></li> <li><p><strong>Peripherals</strong> If your sensor uses SPI, you need an SPI peripheral. If your sensor is a USB gadget, you need USB-on-the-go. For a high-end sensor, maybe you need a touch-screen display. Define the minimum set for your application. Memory sizes and external storage may also be relevant.</p></li> <li><p><strong>Production or Project</strong> There are economy of scale questions, availability, toolchain, example code, etc. issues that might affect the choice depending if this is a one-off project, a mock-up learning tool, or a full high volume production concept.</p></li> <li><p><strong>Coding Style</strong> Depending on your <a href="https://iot.stackexchange.com/questions/1425/is-there-a-big-jump-between-prototyping-on-a-pi-and-using-a-microcontroller">existing experience</a>, migrating to a linux or mcu development environment might be a significant cost, might be necessary, or might be the reason for the project. Often you might prototype on a more powerful/flexible single-board computer, even if an optimised sensor/hub architecture would result in splitting the functionality between two devices.</p></li> <li><p><strong>Vendor</strong> There may sometimes be a specific reason to only consider devices from a specific vendor, but generally this is one of the most opinion based aspects of device selection. Free samples, good development boards etc. might be a factor here (although someone pays for this in the end).</p></li> <li><p><strong>Software Stack</strong> You may be set on using a specific software stack (maybe to integrate with a cloud provider), in which case there can be hardware requirements (RTC, TRNG), or you might just require a certain library to be available (TLS, COAP). Each RTOS will require certain features, Linux will require more (mmu specifically).</p></li> </ul>
2018-10-17T06:56:21.000
<p>Firstly, if you use HTTP then it is very likely the HTTP headers will dwarf any message you are actually sending. A low-overhead protocol like MQTT may be better suited if one of your key aims is to reduce bandwidth usage.</p> <p>As for the formatting of the data, it comes down to the type of values being sent and whether you need the data to be human readable at all times.</p> <p>If you want really tight packing then something like <a href="https://developers.google.com/protocol-buffers/" rel="nofollow noreferrer">Protocol Buffers</a> is worth a look, but you'll need an encoder/decoder on each end to turn it back into human-readable values.</p>
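<p>To give a feel for the size difference, here is a small Python sketch comparing a JSON payload with a fixed binary layout packed using the standard <code>struct</code> module. This is not Protocol Buffers itself, just an illustration of how much an agreed-upon compact format can save; the field layout is an assumption.</p> <pre><code>import json
import struct

# Example reading: device id, 10 sensor values, 10 output states
device_id = 7
sensors = [21.5, 48.2, 1013.2, 0.0, 3.3, 12.0, 55.1, 9.9, 0.1, 100.0]
outputs = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1]

json_payload = json.dumps({"id": device_id, "s": sensors, "o": outputs}).encode()

# Fixed layout (assumed): 1 unsigned byte id, 10 little-endian floats, 10 bytes of outputs
binary_payload = struct.pack("&lt;B10f10B", device_id, *sensors, *outputs)

print("JSON bytes:  ", len(json_payload))    # well over 100 bytes for this example
print("packed bytes:", len(binary_payload))  # 1 + 40 + 10 = 51 bytes
</code></pre> <p>The catch, as noted above, is that both ends must agree on the layout and you lose human readability on the wire.</p>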
|communication|
Standard message structure for communication in HTTP
3536
<p>My question might be a little strange but I can't find any answer to it. I'm designing a simple IoT system that has some devices as a client and a server that controls these clients like reading sensors, sending commands, etc. In the communication side, I can use any internet-based protocols like HTTP, UDP, TCP, etc.</p> <p>On the other side devices using cellular network 2G to connect to the network which has a low bandwidth. Is there any standard message structure between client and server?</p> <p>For example, If I want to set an led on a device I can send <code>led=1</code> or I can use a JSON-based structure like <code>{led: 1}</code>. But I have a very low bandwidth and I want to use a simple structure that uses compact size. Is there any standard at all?</p> <p>A device might have up to 10 sensors and 10 outputs and I want to get values as fast as possible.</p> <p>I know I can compress my messages but I need a robust and compact message structure.</p>
2018-10-19T08:47:26.720
<p>The Smart Home Skill API does not use the "Alexa, Ask ..... " syntax. It just uses the name of the device you want to control.</p> <p>This is a very deliberate choice; it makes for a much more natural way of speaking to Alexa to control devices, e.g.</p> <blockquote> <p>Alexa, turn off the kitchen light</p> </blockquote> <p>It also handles all the entity extraction and translation for different languages for you.</p> <p>If you want to use the "Alexa, Ask ..." pattern then you will need to define a skill using the <a href="https://developer.amazon.com/docs/custom-skills/steps-to-build-a-custom-skill.html" rel="nofollow noreferrer">normal</a> (not Smart Home Skill) API. In this case you will need to use the tools to map out all the possible sentences that the user might use and flag the entities in those sentences.</p> <p>This is a lot more work for a way of interacting with a device and should only be done if you really need to do tasks that cannot be controlled by the built-in Smart Home Skill actions.</p>
|alexa|
Steps to turn On/OFF the light using custom skill of alexa
3540
<p>Please guide me through the steps and program to turn the light on/off using a custom Alexa skill, as I want to use the invocation command of a custom skill. Please guide me as I am very new to this topic.</p> <p>I have implemented smart light control using the Amazon smart light documentation, coded in a Lambda function and in the Samsung smart light developer console using the client ID and client secret. </p> <p>Now I want to add an invocation command, so I want to proceed with a custom skill.</p>
2018-10-23T17:42:55.967
<p>Yes, provided the device provides its own control infrastructure. The model is (roughly):</p> <ul> <li>Device connects to a gateway</li> <li>Gateway exposes an API which supports a variety of 3rd party systems</li> <li>Control messages are sent from each system, and the systems are unaware of each other</li> </ul> <p>The exception to this model might be where a device uses one of your system hubs to provide its connectivity. In this case, you're relying on one integrator exposing locally connected devices to a competitor.</p> <p>You can refer to this <a href="https://iot.stackexchange.com/questions/1119/what-is-the-typical-network-topology-for-an-iot-network">question</a> for a general idea of how devices connect together. An endpoint may have a direct network connection (for example if it has WiFi), in which case many services can establish a connection. If the endpoint only uses Bluetooth or ZigBee then some sort of hub is needed. Of course, this hub can be a device you provide if the protocols are open.</p>
|alexa|ios|apple-homekit|
Controlling one device using Alexa and Siri at the same time?
3550
<p>Can I have one device set up with HomeKit and Alexa at the same time e.g. a motion sensor, or a smart plug?</p> <p>I must stress I am asking whether it is possible to set up the device with the two services <strong>at the same time</strong>. </p>
2018-10-28T16:39:13.477
<p>If you just want lights and sockets, IKEA have just added Smart Sockets to their Trådfri range.</p> <p>They have a range of different bulbs and include on/off control, dimmers and multifunction (dim/on/off/colour temp). The whole system runs on Zigbee and can include an optional hub that allows control via an app/Alexa/Google Assistant/Apple HomeKit.</p> <p>The hub also provides a CoAP interface so you can add some <a href="https://www.hardill.me.uk/wordpress/2017/04/06/fist-pass-tradfri-mqtt-bridge/" rel="nofollow noreferrer">DIY</a> control.</p> <p>Prices are (IMHO) pretty cheap.</p> <p>They support US/UK/Euro socket types and threaded bulbs.</p> <p>The bulbs are <a href="https://huehomelighting.com/how-to-add-ikea-tradfri-lights-to-philips-hue-bridge/" rel="nofollow noreferrer">known</a> to work with Philips Hue systems if needed.</p>
|smart-home|smart-plugs|smart-lights|
Which items work together for a smart home? (In Europe)
3558
<p>I'm trying to set up a smart home and I know that I need a couple of plugs, 2 switches and some lamps. What I don't know is what extras I need and how I can join them together.</p> <p>After some research, my conclusion was that products marked as Zigbee can work with products from other producers. </p> <p>So the first question is: how?</p> <p>Then, after some other research, it seemed like Xiaomi has the best collection, so I could ignore the risk and just go for everything from the same brand. But some of it still needs a controller, and the controller itself and the plugs are all Chinese-socket compatible, and I couldn't find any adapter cheaper than $10 that is similar to the Chinese one (it says Australian to EU adapter, not sure if it works for Chinese plugs). Meaning that the adapter is going to cost almost the same as the plugs, and I am still not sure if it's going to work.</p> <p>I thought of buying the plugs from another brand and so asked the question below some days ago. Still no answer. <a href="https://iot.stackexchange.com/questions/3556/can-i-use-xiaomi-smart-light-switch-or-other-smart-switches-to-control-devices-f">Can I use Xiaomi Smart Light Switch&#160;or other smart switches to control devices from other producers?</a></p> <p>So now the question is whether you know a combination of devices that will work and save me from this wondering.</p>
2018-10-28T17:51:04.673
<p>The API for this sort of thing is <a href="https://developers.google.com/actions/smarthome/" rel="nofollow noreferrer">here</a>.</p> <p>Google Assistant lets you write Smart Home Actions, which let you add your devices to the device model and then pass messages to your backend to control the devices.</p> <p>Unless you want to end up writing a LOT of code, doing a load of testing and then getting it approved by Google, you don't want to try to do this from scratch. A better approach is to use an existing open source framework like <a href="https://www.home-assistant.io/" rel="nofollow noreferrer">Home Assistant</a> that supports Google Assistant. Home Assistant also supports MQTT.</p> <p>At some point I'll get round to finishing my Node-RED Google Assistant Smart Home node to go with my Amazon Alexa version. </p> <p>Edit:</p> <p>My Google Home Action for Node-RED is now live <a href="https://googlehome.hardill.me.uk" rel="nofollow noreferrer">here</a>.</p>
|mqtt|google-home|google-assistant|
How to connect Google Home with a DIY home automation system?
3559
<p>I have a few devices which ultimately talk to an MQTT bus. This bus is monitored by my own program (in Python) which makes decisions based on the context ("scenarios").</p> <p>I am considering adding a Google Home speaker to this (I do not own one yet) and I am wondering whether it is possible to connect it to my system.</p> <p>I imagine that there is a need to </p> <ul> <li>explain to Google that when I say "switch on the lights in the living room", it needs to send/set a flag for "lights in the living room" to be "on" in my profile</li> <li>get this message / flag to my orchestration program (either by polling Google, or via websockets, or via another protocol)</li> </ul> <p>Is this at all possible for DIY orchestrators?</p> <p><strong>If so - is there reasonable documentation for this?</strong> I searched in Google and surprisingly I did not find anything (I am used to its API docs as I retrieve Calendar and Directions information from there). There is <a href="https://assistant.google.com/explore?hl=en_us" rel="noreferrer">quite a lot of advertisement</a> on what it can do and all the devices it can connect to but nothing API-like.</p> <p>I initially thought that <a href="https://developers.google.com/actions/extending-the-assistant" rel="noreferrer">Actions</a> would be the way to go but it looks like this is a way to extend Google Assistant (and Google Home) to new actions. My actions are (so far) quite standard - it is rather the "where to apply them" which I do not know how to approach.</p>
2018-10-31T09:34:54.813
<p><code>localhost</code> always points to the machine the code is running on. </p> <p>In this case the lambda is running on one of Amazon's machines, so the web app you are trying to access will not be there (as it's running on your machine).</p> <p>You will need to deploy your web app to somewhere public (e.g. an AWS VM or Lightsail instance) and update the lambda to point to that location.</p>
|alexa|aws-iot|aws|
Alexa - call web API in localhost from AWS Lambda Console
3565
<p>I have my Alexa's lambda function on the AWS Lambda Console. There I call a web API I created.</p> <p>If I call my web API from Visual Studio Code, it works great. But if I use the Alexa Developer Console to call my web API, it always says: </p> <blockquote> <p>Error: connect ECONNREFUSED 127.0.0.1:63713</p> </blockquote> <p>Is it because my web API is on <code>localhost</code>? How can I solve this? I'm still struggling to test locally with the Alexa Developer Console...</p> <p><strong>My code:</strong></p> <pre><code>var url = "http://localhost:63713/_apis/v1.0/Car/GetCarById?id=1";

http.get(url, function (res) {
    var webResponseString = '';

    if (res.statusCode != 200) {
        doWebRequestCallBack(new Error("Non 200 Response"), null);
    }

    res.on('data', function (data) {
        webResponseString += data;
    });

    res.on('end', function () {
        var webResponseObject = JSON.parse(webResponseString);
        doWebRequestCallBack(null, webResponseObject);
    });
}).on('error', function (e) {
    doWebRequestCallBack(new Error(e.message), null); // &lt;-- where I receive the error message
});
</code></pre>
2018-11-02T21:42:22.407
<p>The most obvious answer would be to use the <code>azimuth</code> parameter, and check for 180 (south). The sun component also has a 'rising' state (before noon), and <code>next_noon</code> time.</p> <p>If you follow the <a href="http://aa.usno.navy.mil/data/docs/AltAz.php" rel="nofollow noreferrer">link</a> to the US Naval observatory, you can print out a table of solar position for any particular location, on any particular date. For my location (52N), I see the sun crosses the horizon around 7am, 4pm at this time of year, and reaches a maximum elevation of 21.5 degrees. In the middle of the summer at the same location, I get 4am, 8pm and 61 degrees elevation.</p> <p>There is no simple calculation in this case. The zero points are constant as being the start and end of the day (from which you can pick earlier or later references), but the elevation does not make any correction for your location or the time of year.</p> <p>Regardless of how you determine the correct angle for 'noon', you would need to repeatedly update this based on the date.</p> <p>A better approach might be to determine the relevant time of day based on location, since a constant UTC time might be a close enough approximation. Here, 12:00 is good enough (but I have the meridian just a few miles away).</p>
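<p>If you do end up computing a time rather than triggering on elevation or azimuth, a rough longitude-only approximation takes a few lines of Python. This ignores the equation of time, which shifts true solar noon by up to roughly ±16 minutes over the year, so treat it as an illustration rather than a precise value; the example longitude is made up.</p> <pre><code>from datetime import datetime, timedelta, timezone

def approximate_solar_noon_utc(longitude_deg_east, date):
    """The mean sun crosses the local meridian at roughly
    12:00 UTC minus 4 minutes per degree of east longitude.
    Ignores the equation of time (up to ~16 min of error)."""
    offset_minutes = 4.0 * longitude_deg_east
    noon = datetime(date.year, date.month, date.day, 12, 0, tzinfo=timezone.utc)
    return noon - timedelta(minutes=offset_minutes)

# Example: longitude 0.1 degrees west (i.e. -0.1 east) -- a made-up location
today = datetime.now(timezone.utc).date()
print(approximate_solar_noon_utc(-0.1, today))
</code></pre>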
|smart-home|home-assistant|
How to trigger on solar noon in Home Assistant?
3578
<p>The sun component has <code>sunset</code> and <code>sunrise</code> events, but I would like to trigger on <a href="https://en.wikipedia.org/wiki/Noon" rel="noreferrer">solar noon</a>. <a href="https://www.home-assistant.io/docs/automation/trigger/#sun-trigger" rel="noreferrer">This page</a> tells me I can use the elevation for this. What number should I enter to achieve what I want?</p>
2018-11-05T15:25:44.497
<p>Although the Lowe's troubleshooting steps recommend re-pairing the unit to the Iris hub via WIFI, I found a much easier method.</p> <p>Lowe's recommends tapping the Tank Light five times on the water softener unit itself to reset this WIFI-enabled device, and then following the pairing instructions in the Iris app to re-pair the device (see video instructions at <a href="https://www.youtube.com/watch?v=uhJQXagqjTg" rel="noreferrer">https://www.youtube.com/watch?v=uhJQXagqjTg</a>). </p> <p>However, I tried just unplugging the power cord from the wall socket for ten seconds and then plugging it back in, and that fixed it right up without having to go through the complex re-pairing process! </p>
|smart-home|wifi|
"No connection" reported on Iris By Lowe's Android app for WIFI-enabled Whirlpool Water Softener (WHESCS)
3583
<p>Not sure what caused it, but my water softener stopped connecting to the Iris app recently. Has anyone found an easy solution?</p>
2018-11-07T15:08:08.363
<p>Precise Description for the Query</p> <h3>Initial NTP time sync</h3> <ol> <li><p>In the Yocto build system under <code>conf/local.conf</code> add <code>ntp</code> recipe as follows:</p> <pre><code> IMAGE_INSTALL_append = " ntp" </code></pre></li> <li><p>on Target board initially stop the <code>systemd</code> service:</p> <pre><code> systemctl stop ntp </code></pre></li> <li><p>Assuming board is connected to the internet:</p> <pre><code> ntpd -gq </code></pre> <p><strong>Info</strong>: Check time using <code>date</code></p></li> <li><p>For safe side also sync the Hardware clock to NTP time:</p> <pre><code> hwclock -w --localtime </code></pre></li> <li><p>Restart the <code>systemd</code> service</p> <pre><code> systemctl start ntp </code></pre></li> </ol> <h3>Setup a local NTP Server on Embedded Board</h3> <ol> <li><p>Stop the <code>systemd</code> service:</p> <pre><code> systemctl stop ntp </code></pre></li> <li><p>Edit the <code>/etc/ntp.conf</code> to make the embedded board broadcast the NTP timestamps on port 123. Add the following line:</p> <pre><code> # Here the IP Address could that of your board but make sure to use # Broadcast address (x.x.x.255) and if you have a larger network # select your subnet masks accordingly broadcast 192.168.1.255 </code></pre></li> <li><p>Restart the <code>systemd</code> service:</p> <pre><code>systemctl start ntp </code></pre></li> </ol> <h3>Achieving timesync with Sensor Nodes</h3> <ol> <li><p>Assuming one has boards that can be programmed with Arduino; download the <code>NTPClient</code> Arduino Library.</p></li> <li><p>In your Sketch use the <code>NTPClient</code> constructor to connect to your Local NTP server via its IP address</p> <pre><code>NTPClient timeServer(ntpUDP, "192.168.1.123", 0, 60000); </code></pre></li> </ol> <p>and obtain the timestamps from the Local NTP Server</p> <h2>References</h2> <ul> <li><a href="https://github.com/arduino-libraries/NTPClient" rel="nofollow noreferrer">NTPClient Arduino Library</a></li> <li><a href="https://askubuntu.com/questions/14558/how-do-i-setup-a-local-ntp-server">Ask Ubuntu SE Query as mentioned by @hardillb's answer above</a></li> </ul>
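<p>To cover the second part of the question (checking the server from a regular computer), you can either use a tool such as <code>ntpdate -q &lt;server-ip&gt;</code>, or query it directly with a few lines of Python. The sketch below sends a minimal SNTP request and prints the server's transmit timestamp; the IP address is an assumption, adjust it to your board.</p> <pre><code>import socket
import struct
import time

NTP_SERVER = "192.168.1.123"   # your embedded board (assumed address)
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def query_ntp(server, timeout=2.0):
    # 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte -&gt; 0x1B
    packet = b'\x1b' + 47 * b'\0'
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    # Transmit timestamp: unsigned 32.32 fixed-point value at byte offset 40
    seconds, fraction = struct.unpack("!II", data[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

server_time = query_ntp(NTP_SERVER)
print("server :", time.ctime(server_time))
print("local  :", time.ctime())
</code></pre>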
|arduino|linux|
Make Embedded Board NTP Server for Arduino boards in the subnet
3592
<p>I have an Embedded Board where the distribution on it is a custom Linux Distribution based on Yocto.</p> <p>I have added <code>ntp</code> and via <code>ntpd</code> I will sync time with the common ntp pools either via UMTS 3G dongle or Ethernet.</p> <p>This board along with some ESP32 PoE board by Olimex are connected via an Unmanaged Switch. </p> <h3>Purpose</h3> <p>The ESP32 boards have sensors that collect information add a timestamp to them and send it to the InfluxDB running on the main embedded board via Ethernet making it a Wired Sensor Network. These ESP32 boards also have an RTC DS3231 on them so I want them to first get the time from a NTP Server running on this embedded board to sync the RTC and then send information to the InfluxDB.</p> <h2>Questions</h2> <ol> <li><p>How does one create an NTP server on the Embedded Board? Can I add a line in the <code>ntp.conf</code> file that can be used to step up a server with for e.g. NTP server at <code>192.168.4.11</code>? Using this IP address in my arduino code I can ask for the timestamps</p></li> <li><p>In case of testing, If I somehow setup a NTP server on the Embedded Board, how can I initially test the time coming from it? Is there a command line utility to poll the NTP server and see if the time coming is correct or not on a regular computer? </p></li> </ol>
2018-11-09T22:22:29.667
<p>Most of the major providers of mass IoT services (AWS, Microsoft, IBM) seem to have settled on <a href="http://mqtt.org/" rel="nofollow noreferrer">MQTT</a>.</p> <p>The MQTT broker runs in the cloud and the devices connect out to the broker (this gets round the NAT problem) and then subscribe to topics on which messages are published. Topics can be general or specific to the device/client.</p> <p>The protocol also has built-in keep-alive checking to determine if the device is still working, and the broker can publish a special message (Last Will &amp; Testament) if the device goes offline unexpectedly.</p>
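<p>A minimal device-side sketch of that pattern, using the Python paho-mqtt client (1.x style callbacks): the device connects out to the broker, registers a Last Will &amp; Testament, and answers a "ping" command with a "pong" on its response topic. The broker address and topic names are assumptions for illustration.</p> <pre><code>import paho.mqtt.client as mqtt

BROKER = "broker.example.com"              # assumed public broker address
DEVICE_ID = "device-42"                    # assumed device identifier
CMD_TOPIC = f"devices/{DEVICE_ID}/cmd"     # C&amp;C -&gt; device
RESP_TOPIC = f"devices/{DEVICE_ID}/resp"   # device -&gt; C&amp;C

def on_connect(client, userdata, flags, rc):
    client.publish(RESP_TOPIC, "online", retain=True)
    client.subscribe(CMD_TOPIC, qos=1)

def on_message(client, userdata, msg):
    if msg.payload == b"ping":
        client.publish(RESP_TOPIC, "pong", qos=1)

client = mqtt.Client(client_id=DEVICE_ID)
# The broker publishes this automatically if the keep-alive lapses unexpectedly
client.will_set(RESP_TOPIC, "offline", qos=1, retain=True)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=30)
client.loop_forever()
</code></pre>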
|communication|
Reverse rpc/device control
3599
<p>Is there a common standard/practice for having an IoT device establish a connection to a command and control server, and then act in a server role (i.e. the C&amp;C sends requests to the device and the device sends back responses)? Something in the vein of reverse HTTP or RPC.</p> <p>EDIT: an example use case: The device is behind a NAT gateway and the C&amp;C is unable to initiate a connection to it. We want to send a "ping" message to the device (to see if it's on and healthy or something) and receive a "pong" in reply. </p>
2018-11-13T01:34:01.033
<p>LoRaWAN localization uses TDOA: Time Difference Of Arrival.</p> <p>Gateways <em>with the precise timing extension</em> are all synchronized via GPS. When a device sends a frame, the exact time at which the frame was emitted is unknown, but by recording the exact time each of several gateways receives that frame (against GPS time), and knowing the positions of the gateways, you can calculate the position of the device.</p> <p>On the gateway side, that requires hardware capabilities beyond what an SX130x offers, a clear view of the sky, some calibration and a clever solver somewhere in a server room.</p> <p>On the device side, that requires nothing.</p>
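<p>For intuition, here is a small Python sketch of the underlying maths: GPS-synchronized arrival-time differences constrain the device to hyperbolas, and a least-squares solver finds the best-fitting point. This is only an illustration of the TDOA idea (2-D, no noise, made-up coordinates), not how a production network solver works.</p> <pre><code>import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Made-up example: gateway positions (metres) and the true device position
gateways = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0], [5000.0, 5000.0]])
true_pos = np.array([1200.0, 3400.0])

# Arrival times = unknown emission time + distance / c (clocks synchronized via GPS)
t_arrival = np.linalg.norm(gateways - true_pos, axis=1) / C

def residuals(p):
    # Distance differences must match c times the arrival-time differences (gateway 0 as reference)
    d = np.linalg.norm(gateways - p, axis=1)
    return (d[1:] - d[0]) - C * (t_arrival[1:] - t_arrival[0])

estimate = least_squares(residuals, x0=np.array([2500.0, 2500.0])).x
print("estimated position:", estimate)   # close to [1200, 3400]
</code></pre>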
|lora|lorawan|geo-tagging|
How does LoRa based geolocation work? How does it measure distance?
3605
<p>The speaker in the video <a href="https://www.youtube.com/watch?v=WEqLESKW6N8" rel="nofollow noreferrer">Richard Lansdowne - LoRa Geolocation</a> mentions straight-off that he won't be quoting an accuracy for geolocation using LoRa, and it's of course not the ideal signal you would use.</p> <p>A comment below the video says:</p> <blockquote> <p>Excellent Richard, you nailed it! You've succinctly described the IoT asset tracking problem and what we need to do going forward to make this real.</p> </blockquote> <p>So far I can't figure out, at least from the video, how this might actually be done using standard LoRa and perhaps LoRa-WAN. How would LoRa protocol estimate distance in order to triangulate, assuming that's how this works? What would be an approximate best-case accuracy based on hardware limitations? (of course your milage may vary)</p>
2018-11-14T12:40:46.403
<p>Yes, that should work. You will need a static way to address your VPN server (no need to pay a license fee, just use OpenVPN), but this could be an AWS instance with a DNS entry.</p> <p>But it sounds like you don't control the network that this will all be connected to. You need to talk to the owners of this network and explain what you are doing, as you are opening up a way to remotely access their network (the VPN box needs access both to the sensors and to the local network) if the external end of the VPN should become compromised. They may want to place your VPN device in a DMZ so it has no access to the rest of the internal network.</p> <p>This type of network setup has been used to compromise large organisations in the past, e.g. the casino that was compromised via its fish tank monitoring system, and a large supermarket that had remote monitoring of its fridges.</p>
|raspberry-pi|networking|hardware|wifi|
Is static IP needed if using VPN to connect to IoT devices?
3610
<p>I have an IoT solution that I need to connect to from remote. I don't have access to open ports nor set static IPs on-site where the IoT devices will be hosted so I was looking for other solutions. </p> <p>An Idea is to use a VPN-box where I connect the devices to, and then connect the VPN-box (router?) to the on-site ethernet. </p> <p>Would I then be able to connect to the devices from a raspberry pi (stored off-site) and would be able to access them with internal IPs since it is on the same VPN network?</p> <p>In theory this sounds like a plausible solution – but I am not sure if I am missing something. So I am looking for an answer if this is even possible before purchasing VPN routers and licenses. </p> <p>Would it work? Is there an even better solution for this?</p> <p>The IoT devices are two I/O remotes (Moxa) which I am connecting to via python using pymodbus. </p>
2018-11-17T15:06:20.850
<p>Keeping the device muted when not in use removes a huge amount of the benefit of a voice assistant: if you have to get up, walk across the room and unmute it to use it, you might as well just install the app (the Alexa app, or an Android phone with Google Assistant) on your phone and only launch it when you want it.</p> <p>If you want a dedicated device that doesn't use the cloud then there are projects to roll your own (e.g. <a href="http://jasperproject.github.io/" rel="nofollow noreferrer">http://jasperproject.github.io/</a>) but remember that the benefits of the cloud are:</p> <ul> <li>The huge amount of training data, which means that its voice matching gets better and better all the time. An offline version will only be as good as the initial model it is loaded with at the time it's deployed.</li> <li>Other people write all (most) of the integrations/skills for you. </li> </ul>
|security|amazon-echo|google-home|privacy|cloud-computing|
Privacy with Voice Assistants
3619
<p>I am interested in getting a Smart Home system with a good voice assistant, but my wife refuses to allow a voice assistant like Google Home or Amazon Alexa because she does not want ANY recordings from our home being stored in the cloud. </p> <p>For example, this link discusses how Alexa stores requests in the cloud for machine learning of the Alexa system.</p> <p><a href="https://iot.stackexchange.com/questions/357/is-the-amazon-echo-always-listening-and-sending-data-to-the-cloud">Is the Amazon Echo &#39;always listening&#39; and sending data to the cloud?</a></p> <p>Is there any option here? Is there a smart home system that does not store recordings off-site? Any good options to ensure privacy and security with these systems?</p> <p>She's also worried about the ability of the government to eavesdrop. The above link quotes Intellihub: "Echo ... can be easily hacked and used by government agencies like the FBI to listen in on conversations."</p> <p>One respondent suggests using the Mute on the bundled remote when not in use: is that sufficient, or could a good hack bypass the Mute request?</p> <p>Bottom-line: do I have to give up some Privacy &amp; Security to get a Smart Home system?</p>
2018-11-21T13:04:54.857
<p>The code that handles the <code>max_queued_messages</code> is <a href="https://github.com/eclipse/paho.mqtt.python/blob/master/src/paho/mqtt/client.py#L1168" rel="nofollow noreferrer">here</a></p> <pre><code>if self._max_queued_messages &gt; 0 and len(self._out_messages) &gt;= self._max_queued_messages:
    message.info.rc = MQTT_ERR_QUEUE_SIZE
    return message.info
</code></pre> <p>This looks like it does not bump any messages out of the queue, and it is up to you to handle storing this new message if you still want to keep it.</p> <p>The code for the <code>max_inflight_messages</code> is <a href="https://github.com/eclipse/paho.mqtt.python/blob/master/src/paho/mqtt/client.py#L1178" rel="nofollow noreferrer">here</a>.</p> <p>This will queue a message if there are currently too many inflight messages.</p> <p>Since the message queue test is done first there will always be room to queue the new message if the inflight limit is hit.</p>
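<p>In practice that means the newest message is the one that gets rejected, and the publish call tells you so. A minimal sketch of how the caller can detect this with paho (the topic, broker and buffering strategy are assumptions for illustration):</p> <pre><code>import paho.mqtt.client as mqtt

client = mqtt.Client()
client.max_queued_messages_set(10)       # a small queue to make the limit easy to hit
client.connect("broker.example.com")     # assumed broker
client.loop_start()

backlog = []                             # our own overflow store

def publish_reading(payload):
    info = client.publish("sensors/reading", payload, qos=1)
    if info.rc == mqtt.MQTT_ERR_QUEUE_SIZE:
        # The new message was NOT queued; keep it ourselves and retry later
        backlog.append(payload)
    return info.rc
</code></pre>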
|mqtt|paho|
MQTT messages hit queued or inflight limits, is it stated somewhere it's the oldest messages that are dropped?
3635
<p>I am learning MQTT in Python and the protocol for QoS = 1 and 2. I'm concerned about my Raspberry Pi getting too bogged down, or about other unexpected problems at the server. I can see there are limits for queued and inflight messages as shown below. </p> <p>So far I haven't been able to read about or understand what happens to my newest published message if these queues hit either of their limits. I would guess that the newest message gets top treatment and the oldest are dropped when a limit is reached, but I can't find that stated explicitly.</p> <p>Is that in fact what will happen?</p> <p><em>"Bonus points:"</em> Is there a way to learn which Message IDs are still held in the queue?</p> <hr> <pre><code>import paho.mqtt.client as mqtt

client = mqtt.Client("I am your client")
</code></pre> <p>Reading about QoS 1 and 2, <code>help(client.max_queued_messages_set)</code> yields </p> <blockquote> <p>Set the maximum number of messages in the outgoing message queue. 0 means unlimited.</p> </blockquote> <p>and</p> <p><code>help(client.max_inflight_messages_set)</code> yields</p> <blockquote> <p>Set the maximum number of messages with QoS>0 that can be part way through their network flow at once. Defaults to 20.</p> </blockquote>
2018-12-04T18:09:05.917
<p>The correct phrase is not pressure, but external force applied on top of the sensor. </p> <p>One way to verify (if you can afford it) is to actually test the effect of that external load on its readings. </p> <p>Another way is to install it in a casing that will absorb the external load. This casing needs holes so that temperature and humidity can get through to the sensor.</p>
|sensors|
Sensors under increased pressure
3668
<p>I am wondering how the DHT 22 (measures humidity and temperature) sensor would work under double or triple atmospheric pressure (rough calculations have been done but it will vary and the exact density being dealt with is not currently known).</p> <p>The sensor is being considered to be used for a project I am involved with where the temperature of grain in a bin would be monitored to help prevent the spoiling of the grain. </p> <p>The project's small GitHub is located here <a href="https://github.com/PhysicsUofRAUI/binTempSensor" rel="noreferrer">https://github.com/PhysicsUofRAUI/binTempSensor</a> , I don't think any more useful information is there but maybe I'm wrong.</p> <p>The project has a few different ways to communicate (only a few may eventually be made), but it will all be done at atm pressure so it is irrelevant in this question.</p> <p>I am also curious in general about how sensors would act in these situations, so if anyone has any knowledge they want to share that'd be awesome.</p> <p>Would it cause parts of it to break? Could other adverse things happen?</p> <p>Cheers and thanks!</p>
2018-12-11T15:03:13.917
<p>I'm guessing your best bet is actually using BLE beacons and notifications. Why? Because <a href="https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BackgroundExecution/BackgroundExecution.html" rel="nofollow noreferrer">there are very few things that can wake iOS from backgrounded mode</a>, and the list is even more limited if you can't use the internet. Beacons and location services are one of them. </p> <p><a href="https://medium.com/arkulpa/ios-stay-connected-to-an-external-ble-device-as-much-as-possible-699d434846d2" rel="nofollow noreferrer">Here's a good article</a> that talks about how to do this over BLE with all the gotchas. But basically you have to implement an iBeacon service and ensure it correctly wakes the phone even when the app isn't running. This will give you the best battery life.</p> <p>It also flips the mechanism from a phone-initiated poll to a device-initiated notification.</p> <p>In theory, using the <code>external-accessory</code> framework and MFi is possible, but it's far more challenging to implement. Implement on the device side, that is. If you somehow do get your system to act as a HomeKit accessory, then the Accessory framework's <a href="https://developer.apple.com/library/archive/featuredarticles/ExternalAccessoryPT/Articles/MonitoringEvents.html" rel="nofollow noreferrer">registerForLocalNotifications lets you monitor events</a> while backgrounded.</p> <p>For turning your system into a HomeKit-like device, you might try <a href="https://github.com/nfarina/homebridge" rel="nofollow noreferrer">HomeBridge</a> to prototype it.</p>
|ios|lan|
Near real-time LAN device status without polling
3682
<p>I must develop an iPhone app to monitor a device.</p> <p>The device is a boat light control system. Using the iPhone app I can turn lights on/off. Lights can be turned on/off from the device touchscreen or using the iPhone app.</p> <p>It's important that the application is notified of light status changes (someone has turned on a light from a wall switch or from the device touchscreen). But it is also important that the device/iPhone status update procedure is power optimized (I don't want to drain the iPhone battery by polling to continuously read status from the device). It's not a problem to add Bluetooth to the device if this can optimize the system.</p> <p>Device and app are on the same LAN (the solution must work without an internet connection).</p> <p>The device exposes a REST service to read/change the lights' on/off status.</p> <p>In the app I need to display the device status changes in near-real-time.</p> <p>Is there any better solution for the device to notify the iPhone app of a status change than having the app poll every n seconds?</p> <p>A kind of local device->iPhone notification? Maybe some feature of HomeKit might help?</p>
2018-12-15T11:45:21.140
<p>If the phone is new enough to run a version of Android with full Google Assistant then it should work for most things.</p> <p>The main points that differ will be:</p> <ul> <li>The microphone on the phone won't be nearly as good as the microphone array in the dedicated Google Home device. So voice recognition across the room will likely not be as good.</li> <li>You won't be able to set up things in rooms, e.g. say "OK Google, turn on the light" and have it know that a particular bulb is grouped with a particular Home device, rather than saying "OK Google, turn on the bedroom light".</li> <li>You can't add the phone to a speaker group (you can add Chromecasts to speaker groups) to allow for multi-room audio.</li> </ul> <p>Sound quality from a Google Home Mini isn't bad (definitely way better than a phone speaker).</p>
|smart-home|google-home|google-assistant|android|smart-assistants|
Can an Android phone do the same things as Google Home (Mini)?
3695
<p>I'm considering buying a Google Home Mini, but it's unclear to me if it can actually do things that my smartphone can't do? </p> <p>I currently only have one "smart" light bulb, and I figure I'd mostly use a Google Home Mini to set my morning alarm, get news and basic things like that. But with time it would be nice to be able to turn on lights and other things by voice command. I can't really see that I would play music on such a device, so sound quality is not an issue.</p> <p>So my question is - could I just as well use an Android phone to do the same things as a Google Home Mini? I use on as my regular phone, and I also have an old one laying around that could be plugged in and basically only used for this purpose. </p>
2018-12-16T14:11:28.627
<p>Switching it on and off probably won't help, as the configuration is probably saved into non-volatile memory. You do not want to have to reconfigure your device after every blackout or the like.</p> <p>Your other chance would be the hard reset feature added in release 1.6.7, but that would require a button, and as per the <a href="https://github.com/xoseperez/espurna/blob/dev/code/espurna/config/hardware.h" rel="nofollow noreferrer">espurna/config/hardware.h</a> file, the AI-Thinker AI Light does not have a button defined by default.</p> <pre class="lang-c prettyprint-override"><code>// -----------------------------------------------------------------------------
// AI Thinker
// -----------------------------------------------------------------------------

#elif defined(AITHINKER_AI_LIGHT)

// Info
#define MANUFACTURER "AITHINKER"
#define DEVICE "AI_LIGHT"
#define RELAY_PROVIDER RELAY_PROVIDER_LIGHT
#define LIGHT_PROVIDER LIGHT_PROVIDER_MY92XX
#define DUMMY_RELAY_COUNT 1

// Light
#define LIGHT_CHANNELS 4
#define MY92XX_MODEL MY92XX_MODEL_MY9291
#define MY92XX_CHIPS 1
#define MY92XX_DI_PIN 13
#define MY92XX_DCKI_PIN 15
#define MY92XX_COMMAND MY92XX_COMMAND_DEFAULT
#define MY92XX_MAPPING 0, 1, 2, 3
</code></pre> <p>There is no such thing on the <a href="http://wiki.jackslab.org/Noduino_OpenLight#Schemmatics" rel="nofollow noreferrer">schematic</a> either.</p> <p>All in all you will need a re-flash, either to reset the configuration or to upload new firmware with a button defined on one of the free GPIOs of the ESP.</p>
|smart-lights|
How to reset a ESPurna LED bulb?
3700
<p>I have a LED smart bulb (probably a <a href="https://github.com/xoseperez/espurna/wiki/Hardware-AI-Thinker-AI-Light" rel="nofollow noreferrer">ESP8255 based one</a>) flashed with <a href="https://github.com/xoseperez/espurna" rel="nofollow noreferrer">ESPurna</a>. It was configured to connect to my WiFi network and it does, but unfortunately I forgot the password to connect to it (it presents a Basic Authentication login popup).</p> <p><a href="https://github.com/xoseperez/espurna/issues/87" rel="nofollow noreferrer">I have hope</a> that a hard reset would bring it back to factory settings but I do not know how to perform that hard reset.</p> <p>I tried to switch it on (for 4-5 seconds) and off a few times, as read somewhere but it did not do the trick.</p> <p><strong>Is there a standard (or at least expected) way to shortcut some PINs in order to simulate a "reset button press"?</strong> (I really, really would like to avoid reflashing it because of the tricky soldering)</p>
2018-12-23T10:15:02.040
<p>There are devices made specially for dimming using only on-off impulses to set the brightness. A common pattern is to use a short on/off signal for on/off, and a longer on/off signal to increase the brightness in n % steps.</p> <p>So provided you can make the Sonoff switch on and off sufficiently quickly and reliably to get the timing right, adding a <a href="https://www.reichelt.at/electronic-impulse-switch-with-dimmer-fin-15-51-8-230v-p67143.html?GROUPID=3395&amp;OFFSET=16&amp;SID=10V25O9KwQATIAAHA64hU1ca524c0f4ba06c565a3fdb9bddaf8c6&amp;LANGUAGE=EN&amp;" rel="nofollow noreferrer">device like this</a> or any other similar impulse switch with dimmer should work.</p> <p>(Note that I'm not suggesting pulse-width modulating it - the shortest impulse necessary is ~0.5 s. The output remains as set until it receives the next command signal.)</p>
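<p>To make the timing pattern concrete, here is a small Python sketch of the pulse logic: a short pulse toggles the light, a longer pulse steps the brightness. The <code>set_relay()</code> helper and the exact durations are assumptions for illustration - use whatever the datasheet of your impulse dimmer specifies.</p> <pre><code>import time

def set_relay(on):
    """Hypothetical helper: drive the Sonoff relay output (True = closed)."""
    print("relay", "ON" if on else "OFF")

def pulse(seconds):
    """Close the relay for a given time, then release it."""
    set_relay(True)
    time.sleep(seconds)
    set_relay(False)
    time.sleep(0.2)            # small gap so consecutive pulses are distinct (assumed)

def toggle_light():
    pulse(0.5)                 # short impulse: on/off (duration assumed)

def step_brightness(steps=1):
    for _ in range(steps):
        pulse(1.5)             # longer impulse: one brightness step (duration assumed)
</code></pre>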
|mqtt|esp8266|home-assistant|node-red|sonoff|
Controlling normal bulbs brightness using Sonoff devices
3714
<p>I am doing a home automation project in which I should be able to control the lights on/off but more than that, the brightness of the lights.</p> <p>I am using Home Assistant (not hassbian) and Node-Red along with MQTT.</p> <p>I am using a normal bulb I purchased from a hardware store connected to a Sonoff ESP8266 and am able to use Node-RED to trigger a on and off state but am unsure how to trigger the specific brightness level. </p> <p>Is it possible to control the brightness of any normal light that is made into a 'smart light' through the use of Sonoff? Or must they be devices that have that functionality made into the light itself.</p> <p>(same for any other appliances, such as things such like a normal portable fan made into a smart fan through ESP8266 and controlling its speed.)</p>
2018-12-27T23:03:36.660
<p>It appears some research has been done on this already – <a href="http://bayen.eecs.berkeley.edu/sites/default/files/journals/sensing_by_proxy.pdf" rel="noreferrer">Sensing by Proxy: Occupancy Detection Based on Indoor CO<sub>2</sub> Concentration</a> describes a model developed at the University of California, Berkeley to detect occupancy based on CO<sub>2</sub> concentration. </p> <blockquote> <p>We propose a link model that relates the proxy measurements with unknown human emission rates based on a data-driven model which consists of a coupled Partial Differential Equation (PDE) – Ordinary Differential Equation (ODE) system.</p> </blockquote> <p>Their model is apparently more accurate than other machine learning models they tested:</p> <blockquote> <p>The inference of the number of occupants in the room based on CO2 measurements at the air return and air supply vents by sensing by proxy outperforms a range of machine learning algorithms, and achieves an overall mean squared error of 0.6569 (fractional person), while the best alternative by Bayes net is 1.2061 (fractional person).</p> </blockquote> <p>Algorithm 1 (p. 3) in the paper may give some direction on how to implement a similar system to theirs, which seems to be surprisingly reliable given the simplistic nature of the CO<sub>2</sub> sensor.</p>
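<p>As a much cruder baseline than the paper's PDE-ODE model, a steady-state mass balance gives a first approximation: at equilibrium, the CO<sub>2</sub> generated by the occupants equals what the ventilation removes. The sketch below implements that single formula; the ventilation rate and per-person generation rate are assumed typical values, and real rooms rarely reach steady state, so expect it to be rough.</p> <pre><code>def estimate_occupancy(co2_indoor_ppm, co2_outdoor_ppm=420.0,
                       ventilation_m3_per_h=100.0, gen_m3_per_h_per_person=0.018):
    """Steady-state mass balance: n * G = Q * (C_in - C_out).
    Q (ventilation flow) and G (CO2 generated by a seated adult) are assumed typical values."""
    delta_ppm = max(co2_indoor_ppm - co2_outdoor_ppm, 0.0)
    co2_removed_m3_per_h = ventilation_m3_per_h * delta_ppm * 1e-6
    return co2_removed_m3_per_h / gen_m3_per_h_per_person

print(round(estimate_occupancy(820.0), 1))   # roughly 2.2 people for a 400 ppm rise
</code></pre>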
|smart-home|sensors|system-architecture|
Is it possible to use a CO2 sensor to detect how many people are in a room?
3730
<p>I have <a href="https://www.winsen-sensor.com/products/ndir-co2-sensor/mh-z14.html" rel="noreferrer">MH-Z14 Carbon Dioxide sensor</a> and have been using it to try and detect when a room may need some fresh air. But, I've also noticed that the sensor reading drastically increases when a human is present in a room and especially if close to the sensor itself.</p> <p>I am wondering if anyone tried to use the current CO2 value in a room to detect an approximate number of people in a room and how possible and accurate could it be? </p>
2018-12-30T17:50:30.423
<p>After fiddling around I figured out that it looks like you have to send the device state after every <code>SetBinaryState</code> request.</p> <p>So, if the ESP node receives a <code>SetBinaryState</code> command, it has to answer with the binary state, just as it does for a <code>GetBinaryState</code> request.</p> <pre><code>void Light::_handleCommands() {
  Serial.println("Responding to /upnp/control/basicevent...1");
  String request = server-&gt;arg(0);

  if (request.indexOf("SetBinaryState") &gt;= 0) {
    if (request.indexOf("&lt;BinaryState&gt;1&lt;/BinaryState&gt;") &gt;= 0) {
      Serial.println(" - Got Turn on request");
      lightStatus = turnOnLight();
      sendLightStatus();
    }
    if (request.indexOf("&lt;BinaryState&gt;0&lt;/BinaryState&gt;") &gt;= 0) {
      Serial.println(" - Got Turn off request");
      lightStatus = turnOffLight();
      sendLightStatus();
    }
  }

  if (request.indexOf("GetBinaryState") &gt;= 0) {
    Serial.println(" - Got light state request");
    sendLightStatus();
  }

  server-&gt;send(200, "text/plain", "");
}
</code></pre> <p>On top of that, it looks like the response has to 'make sense', i.e. if a Turn On request is received, it has to return '1' as the binary state. Otherwise Alexa will say that the device is not responding or is malfunctioning.</p>
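<p>For reference, the status reply that <code>sendLightStatus()</code> sends back is a small SOAP envelope roughly like the one below. This is a sketch based on how the Belkin Wemo emulation projects typically respond (I have not verified it against a genuine Wemo); the <code>1</code> would be replaced with the actual <code>lightStatus</code> value:</p> <pre><code>&lt;s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"&gt;
  &lt;s:Body&gt;
    &lt;u:GetBinaryStateResponse xmlns:u="urn:Belkin:service:basicevent:1"&gt;
      &lt;BinaryState&gt;1&lt;/BinaryState&gt;
    &lt;/u:GetBinaryStateResponse&gt;
  &lt;/s:Body&gt;
&lt;/s:Envelope&gt;
</code></pre>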
|alexa|amazon-echo|
ESP8266 - Device doesn't support that
3740
<p>I'm using and trying the new Echo Dot generation (3rd Gen) to control a ESP8266 following some tutorials. For now I just want to change the state of a relay using the <code>SetBinaryState</code> action. But after discovering the device, if I try to turn it on, I get the <strong>"Device doesn't support that"</strong> response.</p> <p>The tutorials I've found are for the previous generations, and the packets seem to be different, as I had to fix something in the discovery process. But I can't figure out where's the problem when setting the state as I haven't seen any documentation related to this. This is the <code>setup.xml</code></p> <pre><code>"&lt;serviceList&gt;" "&lt;service&gt;" "&lt;serviceType&gt;urn:Belkin:service:basicevent:1&lt;/serviceType&gt;" "&lt;serviceId&gt;urn:Belkin:serviceId:basicevent1&lt;/serviceId&gt;" "&lt;controlURL&gt;/upnp/control/basicevent1&lt;/controlURL&gt;" "&lt;eventSubURL&gt;/upnp/event/basicevent1&lt;/eventSubURL&gt;" "&lt;SCPDURL&gt;/eventservice.xml&lt;/SCPDURL&gt;" "&lt;/service&gt;" "&lt;/serviceList&gt;" </code></pre> <p>And these are the logs that I'm getting:</p> <pre><code>Received packet of size 94 From 192.168.1.3, port 50000 Request: M-SEARCH * HTTP/1.1 Host: 239.255.255.250:1900 Man: "ssdp:discover" MX: 3 ST: ssdp:all Responding to search request ... Sending response to 192.168.1.3 Port : 50000 Response sent! Received packet of size 101 From 192.168.1.3, port 50000 Request: M-SEARCH * HTTP/1.1 Host: 239.255.255.250:1900 Man: "ssdp:discover" MX: 3 ST: upnp:rootdevice Responding to search request ... Sending response to 192.168.1.3 Port : 50000 Response sent! Responding to setup.xml ... Sending :&lt;?xml version="1.0"?&gt;&lt;root&gt;&lt;device&gt;&lt;deviceType&gt;urn:Belkin:device:controllee:1&lt;/deviceType&gt;&lt;friendlyName&gt;Living room light&lt;/friendlyName&gt;&lt;manufacturer&gt;Belkin International Inc.&lt;/manufacturer&gt;&lt;modelName&gt;Emulated Socket&lt;/modelName&gt;&lt;modelNumber&gt;3.1415&lt;/modelNumber&gt;&lt;UDN&gt;uuid:Socket-1_0-38323636-4558-4dda-9188-cda0e616a12b&lt;/UDN&gt;&lt;serialNumber&gt;221517K0101769&lt;/serialNumber&gt;&lt;binaryState&gt;0&lt;/binaryState&gt;&lt;serviceList&gt;&lt;service&gt;&lt;serviceType&gt;urn:Belkin:service:basicevent:1&lt;/serviceType&gt;&lt;serviceId&gt;urn:Belkin:serviceId:basicevent1&lt;/serviceId&gt;&lt;controlURL&gt;/upnp/control/basicevent1&lt;/controlURL&gt;&lt;eventSubURL&gt;/upnp/event/basicevent1&lt;/eventSubURL&gt;&lt;SCPDURL&gt;/eventservice.xml&lt;/SCPDURL&gt;&lt;/service&gt;&lt;/serviceList&gt;&lt;/device&gt;&lt;/root&gt; Responding to /upnp/control/basicevent1... Request: &lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"&gt;&lt;s:Body&gt;&lt;u:GetBinaryState xmlns:u="urn:Belkin:service:basicevent:1"&gt;&lt;BinaryState&gt;1&lt;/BinaryState&gt;&lt;/u:GetBinaryState&gt;&lt;/s:Body&gt;&lt;/s:Envelope&gt; Responding to /upnp/control/basicevent1... 
Request: &lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"&gt;&lt;s:Body&gt;&lt;u:GetBinaryState xmlns:u="urn:Belkin:service:basicevent:1"&gt;&lt;BinaryState&gt;1&lt;/BinaryState&gt;&lt;/u:GetBinaryState&gt;&lt;/s:Body&gt;&lt;/s:Envelope&gt; </code></pre> <p>And then tons of these packets:</p> <pre><code>Received packet of size 278 From 192.168.1.1, port 1900 Request: NOTIFY * HTTP/1.1 HOST: 239.255.255.250:1900 CACHE-CONTROL: max-age=3000 Location: http://192.168.1.1:5431/igdevicedesc.xml SERVER: Router UPnP/1.0 miniupnpd NT: uuid:f5c1d177-62e5-45d1-a6e7-9446962b761e USN: uuid:f5c1d177-62e5-45d1-a6e7-9446962b76 </code></pre> <p>One thing I've noticed is that the <code>eventservice.xml</code> never gets called, but I'm not sure if that's correct. </p>
2019-01-04T14:10:39.677
<p>A workaround that I have just found today is that usually when the Mini asks, "is that percent one dollars, percent two dollars," it actually already knows the right person you wish to call, so just answer, "YES!" It then goes ahead and makes the call to the right person I had requested!</p> <p>Also, if I specifically say, "OK Google, call my Mother AT HOME," it works! It seems that the only time it gets stuck saying, "Didn't catch that. Is that percent one dollar, percent two dollars?" is when I say, "OK Google, call my Mother."</p> <p>It would appear that the "%$1" variable might stand for "Mother" and the "%$2" might stand for "HOME" in this scenario. It's just that Google Home is not properly evaluating those variables and matching them with the words in my request. This would explain why it only spits out that cryptic, generic-sounding question, "Didn't catch that. Is that % one dollar, % two dollars?"</p>
|google-home|
Google Home Mini won't recognize contact and place a call, saying it, "didn't catch that--is that percent one dollars, percent two dollars?"
3749
<p>What can I do about my Google Home Mini not successfully placing calls I try to make to some of my contacts? It works for most contacts I tell it to call, but mysteriously some calls won't go through and the Mini gets stuck, telling me it, &quot;didn't catch that--is that percent one dollars, percent two dollars?&quot;</p> <p>After many months of searching the Internet for a fix, I still have the same issue. I have not tried any solutions because I've found none yet. It seems to happen to some contacts and not others. Mom and Dad, for example, have the same number listed in my contacts. Google Home finds Dad and calls him, but not Mom. Every time I try it saying &quot;call my Mom&quot;, it says, &quot;Didn't catch that. Is that percent one dollar, percent two dollars?&quot; Meanwhile, if I say &quot;call my Dad,&quot; it dials their home just fine. If I ask, &quot;what is my Mom's home phone number,&quot; it gives me the same number answer as when I ask, &quot;what is my Dad's home phone number.&quot; It's the right phone number! So it knows the right thing. Yet Google Home gets stuck when I try to call her and not him.</p> <p>As a developer, it is clear to me that the Google Home Mini is improperly evaluating some %$1 and %$2 variable references in a scripted prompt within a string variable in its code, but no one has done anything about it. The issue's been reported to Google at <a href="https://support.google.com/assistant/thread/414485?hl=en" rel="nofollow noreferrer">I asked Google Home Mini to call someone and it won't work</a>. Still, because of how the issue appears to be tied to the way unique contacts are set up in my address book, I think there ought to be a way that this could be corrected by us, the users--perhaps by changing the way our contacts are set up. I'm putting this out there in hopes that someone has come up with a solution!</p>
2019-01-06T01:32:09.577
<p>This is because the message payload is far bigger than will fit into a single TCP packet; the problem is that you are not starting the Paho client network loop, which chunks the message up and streams it out to the network.</p> <p>You have two choices.</p> <p>First, start the network loop in the script. This is best if you are planning on sending multiple images.</p> <p>Secondly, if you are just publishing one image, the Paho client library has a <code>single</code> helper function to handle just that.</p> <p>Examples of how to use both can be found in the following Stack Overflow <a href="https://stackoverflow.com/questions/41334381/mqtt-will-not-publish-over-python/41334927#41334927">Answer</a>.</p>
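<p>For illustration, a minimal sketch of both approaches (the broker hostname and topic here are placeholders):</p> <pre><code>import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

BROKER = "broker.example.com"     # placeholder
TOPIC = "UP/site01/pod01/image"   # placeholder

payload = open("image.jpg", "rb").read()

# Option 1: run the network loop so the large payload gets streamed out
client = mqtt.Client()
client.connect(BROKER)
client.loop_start()
info = client.publish(TOPIC, payload, qos=1)
info.wait_for_publish()           # block until the whole message has gone out
client.loop_stop()

# Option 2: one-shot publish, the helper runs its own network loop
publish.single(TOPIC, payload, hostname=BROKER, qos=1)
</code></pre>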
|mqtt|raspberry-pi|paho|
Publish image data to MQTT not showing
3757
<p>I am trying to publish image data to MQTT (CloudMQTT) with following code, but the data is not appearing on MQTT, don't even see any data on MQTT broker.</p> <pre class="lang-python prettyprint-override"><code>import identity import json import paho.mqtt.client as mqtt import time import datetime import RPi.GPIO as GPIO import bme280 import picamera import base64 DEVICE_ID = identity.local_hostname() config = json.loads(open('config.json').read()) SERVER = config['mqtt1']['hostname'] PORT = int(config['mqtt1']['port']) USER = config['mqtt1']['username'] PASSWORD = config['mqtt1']['password'] def on_connect(client, userdata, flags, rc): if rc == 0: print(&quot;Connected to broker&quot;) else: print(&quot;Connection failed&quot;) def on_subscribe(client, userdata, mid, granted_qos): print(&quot;Subscribed: &quot; + str(message.topic) + &quot; &quot; + str(mid) + &quot; &quot; + str(granted_qos)) def on_message(client, userdata, message): print(&quot;message topic=&quot;,message.topic) print(&quot;message qos=&quot;,message.qos) print(&quot;message retain flag=&quot;,message.retain) client = mqtt.Client(DEVICE_ID) camera = picamera.PiCamera() camera.resolution = (1280, 720) client.username_pw_set(USER, password=PASSWORD) client.on_connect= on_connect client.on_subscribe= on_subscribe client.on_message= on_message client.connect(SERVER, port=PORT) client.loop_start() topics = [(&quot;DOWN/site01/pod02&quot;,2)] client.subscribe(topics) data = {'device_id':DEVICE_ID} PUBLISH_DATA = &quot;UP/&quot; + &quot;site01/&quot; + DEVICE_ID + &quot;/data&quot; PUBLISH_IMAGE = &quot;UP/&quot; + &quot;site01/&quot; + DEVICE_ID + &quot;/image&quot; try: while True: data['date'] = str(datetime.datetime.today().isoformat()) data['temperature'],data['pressure'],data['humidity'] = bme280.readBME280All() data['switch1'] = GPIO.input(14) data['switch2'] = GPIO.input(15) client.publish(PUBLISH_DATA,json.dumps(data)) camera.capture('image.jpg') image_data = open(&quot;image.jpg&quot;, &quot;rb&quot;) image_string = image_data.read() imageByteArray = bytes(image_string) client.publish(PUBLISH_IMAGE, imageByteArray, 0) time.sleep(10) except KeyboardInterrupt: camera.close() client.disconnect() client.loop_stop() GPIO.cleanup() </code></pre> <p>Anyone know what I am missing here?</p>
2019-01-07T07:31:29.190
<p>The band is a BLE (Bluetooth Low Energy) device; as such it will broadcast beacon packets at regular intervals (the interval is configurable within the BLE spec).</p> <p>These beacons are how devices (e.g. phones) know that they are in range and can then connect to them to get more data, but the beacons can also contain a small amount of data (e.g. BLE temp sensors, physical web URL beacons or iBeacons). Beacon-only devices can run for years on just a coin cell.</p> <p>As for a detector, anything with a BLE-capable adapter can be used. A Raspberry Pi Zero W is a good start for a prototype; a simple BLE beacon scanner can be written as a shell script with the <code>hcitool</code> command line tool, or in any number of other languages (e.g. Node-RED has a BLE beacon listener node).</p> <p>Each device will have a MAC address, but be aware that cheap devices may not be unique (I once bought 20 USB BLE dongles and found 5 with the same MAC address).</p>
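<p>A rough Python sketch of such a scanner, using the <code>bluepy</code> library (just one option, and an assumption on my part; run it as root on the Pi):</p> <pre><code># pip install bluepy
from bluepy.btle import Scanner

scanner = Scanner()
devices = scanner.scan(10.0)   # listen for advertisements for 10 seconds

for dev in devices:
    # dev.addr is the MAC address, dev.rssi gives a rough idea of proximity
    print(dev.addr, dev.rssi, dev.getScanData())
</code></pre>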
|bluetooth|
How can I detect the presence of a Xiaomi Mi Band 3 (which uses BlueTooth)?
3765
<p>As you can see from <a href="https://hardwarerecs.stackexchange.com/questions/10368/wearable-wifi-with-long-battery-life-12-hours">my question</a> on h/w recommendations, I am trying to design an evacuation system for a chemical factory.</p> <p>That requires knowing which room each employee is in at any given time. I can handle the system to track the employees, but have been looking for a long time for a durable wearable with long battery life for each employee to wear or carry.</p> <ul> <li><p>I had considered Android 'phones, but they might be too expensive/bulky/fragile/short on battery life.</p></li> <li><p>A Raspberry Pi Zero W is cheap, but also quite large, needs a casing and I am unsure about battery life.</p></li> <li><p>Passive RFID might not have the range, and active requires a battery.</p></li> <li><p>The <a href="https://learn.adafruit.com/adafruit-flora-bluefruit-le" rel="nofollow noreferrer">AdaFruit Flora BLE</a> looks interesting, but I can't find data about its battery life.</p></li> </ul> <p>Then I had an epiphany when I looked at my wrist and saw the cheap fitness tracker there. It's a Xiaomi Mi Band 3 which I got free with my last 'phone.</p> <p>I am charging it about once every 3 weeks, although I currently do not turn BT on. I will need to calibrate that, although reviews give it 7 days of heavy usage.</p> <p>So - finally - to the question: how can I detect transmissions from the device? If they are frequent enough (say, more than once per minute), then it doesn't matter what the signal is, so long as I can get a MAC address out of it and use that to locate the device.</p>
2019-01-07T09:43:50.277
<p>In the Internet of Things, you have Smart Objects and not-Smart Objects.</p> <p>The <strong>not-Smart Objects</strong> are sensors and actuators. <strong>Sensors</strong> allow you to obtain different measurements from the environment: light fluctuation using a photoresistor, temperature using a thermistor, or the detection of flames, sounds, movements, or any other change in the environment. The <strong>actuators</strong> can perform an action. Examples of mechanical actuators are motors, servomotors or hydraulic pumps, and examples of actions are sending a message, controlling LEDs, turning on lights or controlling the movement of a robot or any other action the robot makes available.</p> <p>On the other hand, you have the <strong>Smart Objects</strong>. These are composed of not-Smart Objects, maybe only sensors, maybe only actuators, or a combination of both. However, Smart Objects also have the capacity to think, because they have a processor. For instance, a smartphone, a microcontroller like an Arduino, etc.</p> <p>You can see this in one of the existing definitions: '<em>A Smart Object, also known as Intelligent Product, is a physical element that can be identified throughout its life and interact with the environment and other objects. Moreover, it can act in an intelligent way and independently under certain conditions. Furthermore, Smart Objects have an embedded operating system and they usually can have actuators, sensors, or both. This allows Smart Objects to communicate with other objects, process environment data, and do events</em>'.</p> <p>Besides, we can classify Smart Objects according to the level of intelligence they have: three levels of intelligence, three levels for the location of that intelligence, and three levels of aggregation of that intelligence.</p> <p>A more detailed explanation and the classification can be found in: <a href="https://www.researchgate.net/publication/307638707_A_review_about_Smart_Objects_Sensors_and_Actuators" rel="nofollow noreferrer">https://www.researchgate.net/publication/307638707_A_review_about_Smart_Objects_Sensors_and_Actuators</a></p> <p>Here is an explanation I gave earlier about the Internet of Things: <a href="https://iot.stackexchange.com/questions/1246/whats-the-difference-between-the-internet-of-things-and-the-traditional-interne/1249#1249">What&#39;s the difference between the Internet of Things and the traditional Internet?</a></p>
|smart-home|sensors|
Dumb sensor vs smart sensor
3768
<p>Every time I read about the "Internet of Things" it is suggested that devices are smart, but when it comes to an implementation of an IoT ecosystem, it is not so clear to me anymore. So I need some help with the explanation of the terms "Internet of Things" and "smart devices".</p> <p>For me, there are two cases:</p> <ol> <li><p>Devices are very smart. Example: the classic smart fridge which orders milk when it's empty. The fridge is able to place an order by itself and does not need any help from intermediate logic between the device (fridge) and the webshop.</p></li> <li><p>Devices are as dumb as possible. The fridge does not know if the milk is empty. It just knows that sensor A ("milk detector") has status "empty" or "not empty". Or it just has a sensor B which counts the milk bags in the fridge and pushes this to a queue or a REST service. This service is an intermediate layer between other systems like webshops or other devices.</p></li> </ol> <p>Does "smart device" even mean that the device itself is able to evaluate its own data and therefore perform actions without any external help?</p> <p>I can't imagine that a simple light bulb should be able to go to my webshop and order itself automatically ... or is this exactly what the "Internet of Things" stands for?</p> <p>For 1. I can see the main drawback is that you have to reprogram the fridge if it should behave differently (order new milk if the milk count is &lt; 2 instead of &lt; 1). The upside is that you don't need any additional layer between webshop and device.</p> <p>For 2. the main drawback is the additional logic layer between webshop and device. But it is easy to modify the smart behaviour in a central way on a common platform where all "smart devices" are registered.</p>
2019-01-07T17:44:07.267
<p>Congratulations on writing a research paper on it.</p> <p>This can be done with streaming platforms such as Kafka or AWS KDA (Kinesis Data Analytics). There are clauses in the query language in KDA (and also in KSQL, I think) which will allow you to link events that happen very close together.</p> <p><a href="https://aws.amazon.com/kinesis/data-analytics/" rel="nofollow noreferrer">https://aws.amazon.com/kinesis/data-analytics/</a><br /> <a href="https://docs.aws.amazon.com/kinesisanalytics/latest/dev/windowed-sql.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/kinesisanalytics/latest/dev/windowed-sql.html</a></p> <p>If all you're looking for is to get the last known sensor values when an RFID tag is read, you can do it very simply with just AWS IoT Rules. Let the sensors write to a DynamoDB table (using an IoT rule) and let the RFID reader pick up values from DynamoDB. You'll just need a way to associate the two together. That can be another DynamoDB table that is first looked at in a rule that gets an RFID reading (along with the reader id) and uses the reader id to look up the associated sensors. Then a second rule executes that picks up the last known values from those sensors. The end result can then be used to write to a queue, etc.</p>
|sensors|system-architecture|web-services|rfid|web-sockets|
Pushing Processed data from RFID + Sensors to a Platform
3770
<p>I have some sensor nodes (Bosch XDKs) that send information to an MQTT broker, and an application reads the information and stores it into InfluxDB. Simultaneously, I have RFID readers that scan some tags and send the information to a MongoDB. Based on some logic I wish to merge the data from InfluxDB and MongoDB together and would like to send it to a custom platform which would have its own database where the <em>clubbed</em> information will be stored to be visualized.</p> <p><a href="https://i.stack.imgur.com/KhvYI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KhvYI.png" alt="Basic Idea"></a></p> <ul> <li><p>The information flow from the data sources is uni-directional (i.e. from sensors and RFID only to the databases)</p></li> <li><p>The <em>Combine</em> service can query the databases, combine the information and send it to the platform</p></li> </ul> <h3>Requirement</h3> <ul> <li>I wish to push <strong>real-time</strong> information to the platform. That is, whenever the MongoDB instance is updated, use some parameters to query the information from InfluxDB, combine, and push it to the platform.</li> </ul> <p>What I am unable to grasp is whether to use <code>webhooks</code> or <code>websockets</code> for the <em>Combine</em> app, and also how it is possible to know if MongoDB was updated with some new information?</p> <p>I read a good <a href="https://stackoverflow.com/questions/23172760/differences-between-webhook-and-websocket">SE Query for Webhooks v/s Websockets</a> and found that the scenario is more in the direction of <code>server-to-server</code> (webhooks), where the client interfaces only to the platform via its internal services and not directly via the two DBs, but I am not sure if this is the case.</p> <p>I see this as a very relevant IoT real-time data application and am looking for some clear architecture and implementation answers for the case mentioned above.</p> <p>Are there repositories that somehow provide help as to how to create webhooks/websockets that fulfill my requirements?</p>
2019-01-09T08:04:36.470
<p>MQTT brokers are not HTTP servers; you cannot POST to a broker, it just won't work.</p> <p>MQTT and HTTP are two totally different protocols; if you want to bridge them you will need to write a program to do that. Doing so in Node-RED is trivial.</p> <pre><code>HTTP-in set to receive POSTs --&gt; MQTT-out to publish to broker
              |
              --&gt; HTTP-response node (to close out the HTTP session)
</code></pre> <p>You can set the topic to publish to in the HTTP-in node, or insert a function node between the HTTP-in node and the MQTT-out node to set the topic and qos settings if you need to.</p>
|mqtt|mosquitto|web-services|
POST messages in JSON to mosquitto MQTT not being received from node-RED mqtt node
3780
<p>Not sure how much better I can phrase it in the question but I'll try to explain as best I can.</p> <p><strong>What I am trying to do:</strong></p> <ol> <li><p>Using IFTTT Webhook to <code>POST</code> a JSON message in MQTT Format.</p> <p>POST URL: <code>test.mosquitto.org</code><br /> E.G:</p> </li> </ol> <pre class="lang-json prettyprint-override"><code> { &quot;payload:&quot;: &quot;kitchen&quot;, &quot;topic:&quot;: &quot;device/state&quot;, &quot;retain:&quot;: true, &quot;qos&quot;: 2 } </code></pre> <ol start="2"> <li>Using a Node-RED <code>mqtt input (subscribe) node</code> with the settings configured to <code>test.mosquitto.org</code> port 1883 and topic set to <code>device/state</code>, I should be able to retrieve the payload I've published to the mosquitto broker in my Node-RED node.</li> </ol> <p><strong>What went wrong</strong><br /> I think something might be wrong with the <code>POST</code> to the <code>test.mosquitto.org</code> broker.</p> <p><strong>Troubleshooting</strong><br /> By using <code>mosquitto_pub</code> and <code>mosquitto_sub</code> commands, I'm able to receive the payload in my Node-RED (which means that my Node-RED mqtt node is configured correctly).</p> <p>Commands:<br /> Terminal 1: <code>mosquitto_sub -h test.mosquitto.org -t device/state -d</code><br /> Terminal 2: <code>mosquitto_pub -h test.mosquitto.org -t device/state -m &quot;kitchen&quot;</code></p> <p>Node-RED successfully receives the message in JSON object format. But it doesn't receive anything when I attempt to publish through Postman using the <code>POST</code> method to the URL.</p>
2019-01-10T13:17:03.427
<p>Alexa has updated their software to directly support garage openers. Now I can hook up the garage opener through Hubitat as a garage opener (no more having to trick things using virtual devices) and Alexa sees the garage opener as a garage opener (no more tricking).</p> <p>This allows her to directly open and close the garage opener and report back its state. I can now say &quot;Alexa is garage opener open?&quot; And she will reply correctly.</p>
|alexa|
Can Alexa determine a switch state?
3784
<p>I have a virtual switch set up through Hubitat that actually opens and closes my garage door. I can say "Alexa open Pete" and she will turn the virtual switch on which opens the garage door. Also I can say "Alexa close Pete" to close the garage. The problem I am having is if I ask her "Alexa is Pete open?" I want her to determine the status of the switch. I am even ok if she responds "Pete is on" instead of "Pete is open" but I can't get that far even. </p> <p>I have tried (based on searches of things to try)</p> <ul> <li>Assigning the switch to a room called garage and saying "Alexa is Garage Pete on / open?"</li> <li>Saying "Alexa what is status on Hubitat of Pete?"</li> </ul> <p>The problem seems like it should be pretty simple as it is really determining if a light / switch is on or off but I can't figure how for her to report the status / state of a device at all. Is that something she doesn't support?</p> <p>FYI if you didn't figure out, Pete is the name of my virtual switch for the garage door :)</p>
2019-01-10T15:41:09.897
<p>The answer is that the device connects out to the control server and maintains this connection.</p> <p>The control server then uses this connection to forward requests.</p> <p>Typically messaging protocols like MQTT are used as they designed to be used over a long running connection initiated by the client.</p> <p>This approach also solves the problem that the controller would need to know where device was. Given most home internet connections use dynamic IP addresses (if not even worse CGNAT) this would mean that the device would need a way to update the control server every time the external IP address changed (which it would not easily be able to determine). Port forwarding also needs either the owner to be technically capable enough to set it up, or UPnP to be enabled (which is probably a bad idea in the current security landscape). Port forwarding also limits the number of devices that can be deployed behind a given router.</p> <p>Also see my answer to how smart plugs work: <a href="https://iot.stackexchange.com/questions/3752/how-do-smart-plugs-of-domotic-iot-work/3755#3755">How do smart plugs of domotic IoT work?</a></p>
|smart-home|networking|communication|protocols|cloud-computing|
How Smart Home devices communication works
3785
<p>I have a big doubt that I'm not able to solve.</p> <p>Basically, I want to understand how the smart devices that I have in my home communicate with their servers, or better, how their servers can reach them and send commands (like turn on/off) without any kind of port forwarding.</p> <p>I know that if I want to access a device remotely I need to expose it on the network through port forwarding on my router, but I didn't configure anything for these devices, so what kind of method do they use?</p> <p>Could someone let me know?</p>
2019-01-10T08:58:39.730
<p>A Pi 3b is a very capable system, a quad core 1.2GHz Arm CPU with 1GB of RAM.</p> <p>It should be more than capable for what you are planning, but with all these things it will depend on exactly what you intend to do.</p> <p>Node-RED is basically a programming environment, so it's not possible to say how much resource it will consume without knowing the program (flow) you are going to run on it. (But you can say it will never consume more than 1 core since it's a NodeJS app and as such single threaded)</p> <p>You will have to assemble your system and test it to see how it behaves.</p> <p>The good news is that you should be able to easily move the MQTT broker and reverse proxy to a separate pi simply if (in the very unlikely event) the load becomes too much.</p>
|smart-home|raspberry-pi|
Will my Raspberry Pi home automation hub be able to do all that?
3788
<p>I am setting up a DIY home automation system, and am just planning how my solution will look. I will have a Raspberry Pi 3 as the "hub" of the network. It will run Node-RED, MQTT (Mosquitto), a DotNet website, a database, and a reverse proxy server, and possibly a few other things:</p> <ul> <li>Node-RED: brains of the operation</li> <li>MQTT: To do the heavy lifting talking to wireless IoT things (probably many of which will use <a href="https://github.com/arendst/Sonoff-Tasmota" rel="nofollow noreferrer">Sonoff-Tasmota</a>, or my own custom firmware)</li> <li>DotNet website/database: Gives me a programmable interface for internal/external facing tasks (may not be required if I can do it all with Node-RED, but I'm not that confident with NR yet)</li> <li>Reverse proxy service: SSL termination, security, possibly authentication (again, Node-RED may have me covered here)</li> </ul> <p>I plan to have somewhere in the vicinity of 30-50 devices on the network, most accessed via MQTT, some via HTTP.</p> <p>The question: Will running all of that on a single Raspberry Pi 3b "overload" the system? Am I better off splitting responsibility across 2 Pis (and if so, what is the best logical grouping)?</p> <p>Further, are there any issues with thrashing the SD card in the Pi(s), or should I attach an SSD/HDD? </p>
2019-01-18T17:45:38.713
<p>Yes, explicitly so, albeit it will be relatively slow. </p> <p>Carriers are actually recommending NB-IoT and Cat-M1 combo modules, and dynamically switching to M1 when FOTA is needed, although this will only make sense for some apps.</p>
|over-the-air-updates|
Are Over-the-Air Updates possible with NB-IoT
3811
<p>I'm trying to make an IoT device which I want to be highly power-saving &amp; want it to work in a remote, static environment. So I plan to make a circuit with a "SARA" NB-IoT module. Now I want to know if FOTA is possible with NB-IoT.</p>
2019-01-20T19:24:44.853
<p>They don't claim to be made by SmartThings Inc. They just sell it.</p> <p>The second one has an older FCC ID <a href="https://fcc.report/FCC-ID/T3L-SS014" rel="nofollow noreferrer">T3L-SS014 </a> (2015) than the other one <a href="https://fcc.report/FCC-ID/2AF4S-STS-IRM-250" rel="nofollow noreferrer">2AF4S-STS-IRM-250 </a> (2016). Neither has been manufactured by Samsung or the SmartThings subsidiary. They come from CentraLite Systems, Inc and SAM JIN CO., LTD respectively. The former also used a factory or a partner in Mexico to produce it. Of course, that doesn't say anything about when they were actually produced.</p> <p>Also, the upper one is missing the European CE logo, so it could just be a US-only version. Furthermore, SmartThings was only acquired by Samsung in late 2014, so maybe they only started to ship with <em>Samsung SmartThings</em> branding with the later model.</p> <p>So the original SmartThings Inc, before being bought, probably had a partner in Mexico, and Samsung shifted the production to a Korean partner/subsidiary — I'm not gonna investigate Korean Samsung corporate structures — after they bought it.</p> <p>Long story short, doesn't look fishy to me.</p>
|samsung-smartthings|
Which one is the fake? Samsung SmartThings motion detectors
3821
<p>I bought these at different times, but thought I was getting the same thing.</p> <p>Which one is the fake? And if neither is a fake, which one is newer and/or better? </p> <p>The first one uses a CR2477 battery, almost twice as thick as the CR2450: <a href="https://i.stack.imgur.com/FZyRR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FZyRR.jpg" alt="Sensor 1 + battery"></a></p> <p>Both claim to be made by SmartThings, Inc, but the second one also includes a SmartSense trademark, and uses the thinner CR2450 battery:</p> <p><a href="https://i.stack.imgur.com/wkxpR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wkxpR.jpg" alt="Sensor 2 + battery"></a></p>
2019-01-24T08:21:57.740
<p>There is this 6tisch simulator that implements the whole protocol stack. It's written in Python. Might help.</p> <p><a href="https://bitbucket.org/6tisch/simulator" rel="nofollow noreferrer">https://bitbucket.org/6tisch/simulator</a></p> <p>Cheers.</p>
|6lowpan|contiki|
LLN and RPL Simulator?
3829
<p>I'm working on a project which is about designing an IDS for RPL Protocol. In this project I have to simulate different types of attacks (e.g. Black hole Attack) and RPL Protocol. I saw on the Internet that Contiki OS and Cooja simulator have not been upgraded for 4 years. Do you have an alternative you can suggest? Any Comments would be appreciated.</p>
2019-01-26T23:26:58.987
<p>You are correct about the microphone input: <code>void my_record()</code> samples the microphone output level 1000 times, appending each reading to a string variable, and publishes the resulting string to an MQTT broker.</p> <p>This process repeats 11 times every time <code>my_record()</code> is called.</p> <p>Note: you are sort of misunderstanding the .RAW file. It is a raw file, meaning it is unprocessed and unformatted ... just a stream of bytes. Using the term .RAW just implies a file name extension.</p> <p><code>What command allows MQTT server to generate the .RAW file?</code></p> <p>The MQTT broker (server) does not generate the raw file; the audio data is published to the MQTT broker by an outside source (the ESP8266 in this instance), and it is another client, subscribed to that topic, that writes the .RAW file.</p> <p><code>So how does the MQTT server know which published transmission belongs to which RAW file?</code></p> <p>It does not know. All it does is relay messages. It is up to the <code>publisher</code> to send to the correct <code>topic</code> and it is up to the <code>subscriber</code> to watch the data at the correct topic.</p> <p>The messages could arrive at the subscriber out of sequence, so a sequence number needs to be included with the message if a correct data sequence is desired.</p> <p>Have a look at this for a visual demo of MQTT messages.</p> <p><a href="https://shiftr.io/try" rel="nofollow noreferrer">https://shiftr.io/try</a> ... you can publish to this one (and subscribe)</p> <p>You can get your own account and watch your own messages being sent and received without the clutter of other messages.</p>
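<p>To make the subscriber side concrete, here is a minimal Python sketch of the piece that actually produces the .RAW file (the broker host and topic are placeholders, and a real version would use the sequence number mentioned above to re-order chunks before writing them):</p> <pre><code>import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # append every published batch of samples to the raw file
    with open("capture.raw", "ab") as f:
        f.write(message.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com")   # placeholder broker
client.subscribe("mic/raw")            # placeholder topic
client.loop_forever()
</code></pre>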
|mqtt|esp8266|
How does the MQTT server output a single raw file from multiple client publications?
3838
<p>I'm trying my first IoT project whereby I want to:</p> <ul> <li>have an electret microphone capture audio</li> <li>have an ESP8266 NodeMCU 12-E board submit captured audio to a remote server</li> <li>have a remote server receive the audio data using an MQTT server</li> <li>publish the audio data as a .WAV file on the server</li> </ul> <p>I saw someone online demonstrate something similar in this YouTube video </p> <p><a href="https://www.youtube.com/watch?v=rU_Pw9Jb_PM" rel="nofollow noreferrer">https://www.youtube.com/watch?v=rU_Pw9Jb_PM</a></p> <p>The author shared the project on github here</p> <p><a href="https://github.com/hjltu/esp8266-wifi-microphone" rel="nofollow noreferrer">https://github.com/hjltu/esp8266-wifi-microphone</a> </p> <p>When I study the code, I think what I see is the author taking the value of <code>analogRead(A)</code> and appending it to some kind of string as a payload, which is then published to an MQTT server.</p> <p>I see that the author expects the MQTT server or some other software to process the ESP8266 microphone audio data and output it as a .RAW file. This RAW file is eventually converted to a .WAV file with the help of ffmpeg. </p> <p>My question is this: What command allows MQTT server to generate the .RAW file? Or is this done by an entirely different software? And it appears to me that for a single recording/audio file, the <code>my_record()</code> of <code>esp8266-wifi-mic.ino</code> file will send multiple payloads to the MQTT server. So how does the MQTT server know which published transmission belongs to which RAW file?</p>
2019-01-29T09:46:19.130
<p>As mentioned in the comments...</p> <p>Assuming the controller has a button that triggers the gate to open, you just need to find a way to "press" that button.</p> <p>The simplest way is probably to attach a relay across the 2 wires to the button. The relay can be triggered by something like an ESP8266 or the GPIO pins on a Raspberry Pi.</p> <p>The question is then whether it's a momentary press or whether it needs to be held while the gate opens.</p> <p>If it's momentary then the relay just needs activating for a second; if it's press-and-hold then you will need to activate the relay for the length of time it takes the gate to open.</p> <p>That is a very simple approach and doesn't cover things like feedback on whether the gate is open/closed.</p>
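<p>As a rough sketch of the Raspberry Pi variant (the pin number and pulse length are assumptions), a momentary "button press" via a relay on a GPIO pin could look like this:</p> <pre><code>import time
import RPi.GPIO as GPIO

RELAY_PIN = 17            # whichever GPIO the relay module is wired to

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def press_gate_button(hold_seconds=1.0):
    """Energise the relay across the gate controller's button contacts."""
    GPIO.output(RELAY_PIN, GPIO.HIGH)
    time.sleep(hold_seconds)   # ~1 s for a momentary press; longer if the
                               # button must be held while the gate opens
    GPIO.output(RELAY_PIN, GPIO.LOW)

try:
    press_gate_button()
finally:
    GPIO.cleanup()
</code></pre>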
|smart-home|raspberry-pi|arduino|
Smart gate - where to start?
3844
<p>I have this idea of writing an app/something to make the gate, that we have in front of our house, smart. But I just don't know where to start. I assume that the gate uses IR because we have remotes for it, but I'm not sure. The gate also has a codepad and I want everything to work simultaneously. Meaning that when I'm done with my project the remotes and code still work. </p> <p>So, can anyone help me to get started with this project? </p> <ul> <li>Would it be logical for the gate to use IR? What would it use otherwise? </li> <li>Should I try to read and emit the same signal using a Pi/Arduino? </li> </ul>
2019-01-29T09:51:37.080
<p>I find this project helpful too: <a href="https://github.com/nccgroup/nOBEX" rel="nofollow noreferrer">https://github.com/nccgroup/nOBEX</a></p> <blockquote> <p>nOBEX allows emulating the PBAP, MAP, and HFP profiles to test vehicle infotainment systems and similar devices using these profiles. nOBEX provides PBAP and MAP clients to clone the genuine virtual filesystems for these profiles from a real phones. This means downloading the entire phone book and all text messages. Raw vcards, XML listings, and MAP BMSG structures are stored, and can be modified as desired for negative testing. nOBEX can then act as a PBAP and MAP server, allowing vehicles and other devices to connect to it and retrieve phone book and message information. Vcards, BMSGs, and XML listings are sent exactly as saved, allowing malformed user modified data to go through. Since most vehicle head units require HFP support before they attempt using PBAP and MAP, nOBEX also provides rudimentary support for HFP. It will send back user customizable preset responses to AT commands coming from the vehicle's head unit. This allows mimicking a real cell phone.</p> </blockquote>
|raspberry-pi|bluetooth|
Raspberry Pi emulate phone Bluetooth stack
3845
<p>I'd like to use my Raspberry Pi to emulate phone Bluetooth stack. I'd like to get it paired with car radio, simulate arriving/outgoing call, simulate SMS and the more alerts/notifications I can manage.</p> <p>Below are some solutions I deepen, useful but not exactly what I need (thanks also to <a href="https://stackoverflow.com/questions/28076898/emulate-a-bluetooth-device-from-pc">https://stackoverflow.com/questions/28076898/emulate-a-bluetooth-device-from-pc</a> and <a href="https://android.stackexchange.com/questions/4538/can-i-emulate-a-bluetooth-keyboard-with-my-android-device">https://android.stackexchange.com/questions/4538/can-i-emulate-a-bluetooth-keyboard-with-my-android-device</a>):</p> <ul> <li><p><strong>oFono</strong> (<a href="https://01.org/ofono" rel="nofollow noreferrer">https://01.org/ofono</a>)</p> <p><em>allow controlling a real phone by another device. As other similar hands-free phones projects, it is not what I need: I would like to emulate Bluetooth stack, not controlling a real phone from another device. Devices involved should be car radio interface and Raspberry Pi: no real phone should be involved.</em></p></li> <li><p><strong>BT-Sim</strong> (<a href="http://btsim.sourceforge.net/index.html" rel="nofollow noreferrer">http://btsim.sourceforge.net/index.html</a>)</p> <p><em>is about simulating hardware interface. Not needed for the moment.</em> </p></li> </ul> <p>May someone suggests which is the best library or framework to work on, to accomplish this aim? Someone already played with a similar project? Any useful info?</p> <p>Thank you</p>
2019-02-04T16:00:24.650
<p>Since, as per your spec, this would be a Linux-based system, you could store the data locally in an SQLite database.</p> <p>Further, check out SQLITE-SYNC. With this framework your application can supposedly work completely offline, then perform an automated bidirectional synchronization when an internet connection becomes available.</p> <p>So your app does not need to maintain its own routines to sync forward and back.</p> <p>Ref: <a href="https://ampliapps.com/sqlite-sync/" rel="nofollow noreferrer">https://ampliapps.com/sqlite-sync/</a></p>
|product-design|
How do I store data on iot device with only occasional access to the internet?
3856
<p>We are in the planning phase for a telemetry IoT device with only occasional access to the internet.</p> <p>I found a lot of information online on how to store IoT data in the cloud, what databases to use, how to calculate space requirements, etc. What I'm missing is:</p> <p>How do I store the data locally on the client before sending it to the cloud?</p> <ul> <li>The newest data is the most interesting for us, but for a subset of metrics we want to keep all data points on the device until they get transferred</li> <li>Minimizing device storage is not a top priority</li> <li>Battery life is not crucial; our device will be connected to power as long as it is collecting data</li> </ul>
2019-02-10T16:14:21.177
<p>I had a colleague bring me a Google Home from the US on release day to the UK and it worked just fine before they were released in the UK.</p> <p>The only problem may be that some Google Assistant Actions will not work as they will be limited to certain geographical regions. Also there may be limitations based on where the Google id used with it is registered.</p>
|google-home|
Will google home mini work in foreign country (not supported)?
3869
<p>If I buy a google home in US and give it to a friend in Ethiopia, will it work if he speaks English to it or will the Ethiopian IP address cause an issue with the functionality of the google home device?</p>
2019-02-11T03:06:29.090
<p>This is a programming task. In your code, you can probably read the values of the sensors separately. Now you have to write two threads (one per sensor) that wait for data to be read, and use a mutex or similar synchronisation primitive that holds the program until both readings are available. Once both values have been read, release it, combine your data variables into a JSON payload (or any other format) and send them as a single message, as in the sketch below.</p>
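<p>A minimal Python sketch of that pattern (the sensor-reading functions are placeholders), using two threads and a barrier so both readings leave in one JSON message:</p> <pre><code>import json
import threading

readings = {}
barrier = threading.Barrier(2)     # one party per sensor
lock = threading.Lock()

def read_sensor(name, read_fn):
    value = read_fn()              # blocking read of the actual sensor
    with lock:
        readings[name] = value
    barrier.wait()                 # hold until the other sensor has been read too

# placeholder read functions - replace with the real sensor drivers
t1 = threading.Thread(target=read_sensor, args=("temperature", lambda: 21.5))
t2 = threading.Thread(target=read_sensor, args=("humidity", lambda: 48.0))
t1.start(); t2.start()
t1.join(); t2.join()

payload = json.dumps(readings)     # both values combined into a single message
print(payload)                     # hand this one payload to the LoRa radio
</code></pre>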
|raspberry-pi|lora|lorawan|data-transfer|
How to send data collected by two different sensors simultaneously?
3872
<p>I have two different sensors connected to my Raspberry Pi and I have to send the data to another device. The data is transmitted over a wireless network. I have been asked to send the data in synchronization (i.e. data collected together from the sensors must reach/be processed at the destination together).</p> <p>I have no prior experience in the field of networking or IoT (I'm currently reading Data Communications and Networking by Forouzan for the same). I have searched Google but no relevant website comes up.</p> <p>*Using LoRa for transmission.</p>
2019-02-11T20:20:20.140
<ol> <li>MQTT over the internet is perfectly possible (it's how the AWS IoT, IBM IoT and Microsoft IoT offerings all work). You should probably use MQTT with TLS to ensure it's secure.</li> <li>You run a broker in the cloud and have devices (or other brokers) connect to it. Because an MQTT client connects out to the broker, this works really well for devices that are behind routers using NAT.</li> <li>You don't need a broker at each house/location, but it is a valid way to deploy things: use broker bridging to connect each local broker to a broker in the cloud, as in the example config below. This arrangement can allow things to continue to work when the connection to the outside world fails.</li> <li>Yes, a single (load-balanced cluster of) broker(s) in the cloud is a perfectly reasonable solution (see answer 1).</li> </ol> <p>Sharing a collection of links is probably off topic, but <a href="http://mqtt.org" rel="nofollow noreferrer">http://mqtt.org</a> is a good starting point.</p>
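<p>For reference, a bridge from a local Mosquitto broker at house A up to the cloud broker is only a few lines of <code>mosquitto.conf</code> (a sketch; the hostnames, credentials and topics are placeholders, and for port 8883 you would also add the TLS settings such as <code>bridge_cafile</code>):</p> <pre><code># on the broker in house A
connection house-a-bridge
address cloud-broker.example.com:8883
remote_username house-a
remote_password secret

# push local sensor data up to the cloud, pull commands back down
topic sensors/# out 1
topic commands/# in 1
</code></pre>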
|mqtt|
Quick questions about MQTT of a beginner
3880
<p>I'm new to MQTT so I'll try to explain my situation first and then ask some questions about MQTT.<br><br> Basically what I'm trying to do is set up some sensors in two houses, A and B, and be able to manage those sensors from my home, C, people from both A and B won't care if the sensors are working or not, they won't be doing any managing or checking so basically I need to be able to control and see the sensors' statuses from home. All the sensors have MQTT support and can send data using MQTT.<br><br> Alright so that's what I'm trying to do, I've been reading a bit about MQTT and watching some videos but I've noticed that in most examples everything is connected to the same router so I was wondering that maybe I can't do this with MQTT. <br> So my questions are:</p> <ul> <li>Is it possible to use MQTT over the internet?</li> <li>If so, how would you do it?</li> <li>Will each house need an individual broker?</li> <li>Could I have this work with only one broker home and have the sensors connect over the internet to the broker?</li> </ul> <p>Also I'd appreciate if you could share some links on literature about learning how to use MQTT (preferably using a Raspberry Pi since that's what I wish to use) and do home automation since this is a topic I'm really interested in. <br><br> Thanks to everyone who responds! </p>
2019-02-12T07:36:32.887
<p>The commands, e.g. on/off or set brightness, are all handled in the language the device is set to. They are then converted to a set of enum values that are passed to the backend for control as part of a JSON object. The Google Home Smart Device API documentation here lists the available commands and what gets passed: <a href="https://developers.google.com/actions/smarthome/" rel="nofollow noreferrer">https://developers.google.com/actions/smarthome/</a></p> <p>As for device/room names, they are matched against the language they are entered in. So if you name a room Living Room in English and then try to access it with the Japanese for Living Room, "リビングルーム", I don't think that will work. Once a device is matched by name it is converted to a unique id provided by the hardware manufacturer.</p>
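<p>As an illustration, "dim the living room lights" spoken in any supported language ends up as roughly this language-agnostic fragment of the EXECUTE intent sent to the light's backend (a simplified sketch; the device id is a placeholder):</p> <pre><code>{
  "commands": [{
    "devices": [{ "id": "living-room-ceiling-light" }],
    "execution": [{
      "command": "action.devices.commands.BrightnessAbsolute",
      "params": { "brightness": 30 }
    }]
  }]
}
</code></pre>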
|google-assistant|smart-lights|
Are underlying commands language-agnostic between smart speakers and IoT devices?
3883
<p>I'm an American who lives in Japan. I can speak Japanese reasonably well, but I normally keep my Google Home speakers and Google Assistant app set to U.S. English for ease of use. Now I'm thinking of buying Google-compatible smart ceiling lights for my livingroom from a Japanese manufacturer. (In Japanese homes, most rooms have a universal attachment in the ceiling that accepts any style of ceiling light - the U.S. needs to learn that lesson!) But of course the example voice commands are all in Japanese - I'm sure they are assuming all their users have their smart speakers set to Japanese. I asked a light manufacturer about it, but they merely answered that their lights are only for use in Japan and therefore they have not tested them with English and cannot advise me.</p> <p>So my question (thinking like a programmer) is: When a voice command is given to Google Home to control another device (e.g. "Dim the lights in the living room"), I know Google Assistant will first parse the sentence to determine which device(s), e.g. the room called "livingroom" and device type called "lights". But then, does it send the rest (in this case, the word "dim" in English) to the device as-is, or is there an underlying code that is not English or Japanese at all but part of a common command set for smart lights? I know that I might have to experiment to figure out what English syntax works with any given model (particularly fancier stuff like changing the color temperature or setting the brightness to a specific level), but I just want to know if it's possible.</p> <p>This is actually the second IoT device I've bought in Japan, but the first was only a cheap plug (so the only commands are on and off) and all its communication was through Smart Life (almost certainly a server in China, and it supports multiple languages), so that's not a very good test.</p>
2019-02-13T18:43:23.027
<p>Another option is to create your own service to handle it. The service needs to be connected to your local MQTT broker on the local RPi (which acts as the local gateway), as you have specified. The local gateway is generally used to provide data filtering, but you can also use it for temporary storage of the data your devices send. You can use Python or any programming language that has an MQTT client library. The client service is nothing but another MQTT client running on the gateway that can perform logic and database operations. This client service, which you build, receives the data as soon as it arrives at the MQTT broker on the local gateway and tries to forward it to CloudMQTT. If it can't connect to CloudMQTT because no Internet connectivity is available, it stores the data in a temporary database with a flag that keeps track of whether that data has been published to CloudMQTT or not. The client service (always running in the background) then tries to fetch this data from the database and resend it. If the data is successfully sent to CloudMQTT, the service updates the flag to mark it as sent. You can later delete all rows from the database whose flag indicates they have been sent to CloudMQTT. So in this case your local RPi machine, which runs a local broker (together with a database like <a href="https://randomnerdtutorials.com/sqlite-database-on-a-raspberry-pi/" rel="nofollow noreferrer">SQLite</a>), actually acts as an intelligent gateway. This solution should give you the ability to deliver all collected data even if there is no Internet connectivity for hours or even a day or more, because you have the full backlog stored locally (see the sketch below).</p>
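<p>A stripped-down Python sketch of that store-and-forward client service (the table name, topics and the CloudMQTT hostname are placeholders):</p> <pre><code>import sqlite3
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

db = sqlite3.connect("buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS buffer ("
           "id INTEGER PRIMARY KEY, topic TEXT, payload BLOB, sent INTEGER DEFAULT 0)")

def flush():
    # try to push everything not yet marked as sent up to CloudMQTT
    rows = db.execute("SELECT id, topic, payload FROM buffer WHERE sent = 0").fetchall()
    for row_id, topic, payload in rows:
        try:
            publish.single(topic, payload, hostname="xyz.cloudmqtt.com", port=1883)
            db.execute("UPDATE buffer SET sent = 1 WHERE id = ?", (row_id,))
            db.commit()
        except OSError:
            return        # no connectivity right now, keep the rows for later

def on_connect(client, userdata, flags, rc):
    client.subscribe("sensors/#")      # placeholder topic filter

def on_message(client, userdata, msg):
    # store first so nothing is lost while the uplink is down, then attempt a flush
    db.execute("INSERT INTO buffer (topic, payload) VALUES (?, ?)", (msg.topic, msg.payload))
    db.commit()
    flush()

local = mqtt.Client()
local.on_connect = on_connect
local.on_message = on_message
local.connect("localhost")             # the broker on the local gateway
local.loop_forever()
</code></pre>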
|mqtt|
Alternatives to MQTT for local / remote bridging
3891
<p>Amongst the plethora of MQTT questions, I am wondering what are some alternatives to MQTT for when <strong>all</strong> messages sent to a topic need to be kept, and in a queue for a new subscriber.</p> <p>At my company, we have remote deployments that we manage, and we are wanting to use MQTT for local data collection. The idea would be that data would be sent to the local broker onsite (running on a Raspberry Pi, for example), and the broker would have an MQTT bridge with our <a href="https://www.cloudmqtt.com/" rel="noreferrer">CloudMQTT</a> deployment. If connectivity would be lost, the messages would collect locally, and synchronize again when connectivity was re-established.</p> <p>The set up is typical, like this:</p> <p><a href="https://i.stack.imgur.com/B7voE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B7voE.png" alt="Simple MQTT bridging"></a></p> <p>For my example, on the left side would be many (around ~100) MQTT local brokers running at each location, and on the right would be the CloudMQTT server we pay for.</p> <p>When I read the article <a href="https://www.hivemq.com/blog/mqtt-essentials-part-8-retained-messages/" rel="noreferrer">MQTT Essentials Part 8: Retained Messages</a>, this part was disappointing:</p> <blockquote> <p>A retained message is a normal MQTT message with the retained flag set to true. The broker stores the last retained message and the corresponding QoS for that topic. Each client that subscribes to a topic pattern that matches the topic of the retained message receives the retained message immediately after they subscribe. <strong>The broker stores only one retained message per topic.</strong></p> </blockquote> <p>Essentially what this means is that there would have to constantly be a subscriber on the CloudMQTT server listening for all incoming events from all of our locations; otherwise, data might be lost.</p> <p>MQTT seems built to only keep the most recent message; are there any other software packages that can do this local &lt;=> remote syncing, but keep all messages?</p>
2019-02-15T21:46:08.863
<p>I'll start by saying I've not looked at one of these in person yet, but intercepting the call to the Amazon backend and responding to it to trigger the light is going to be tricky.</p> <p>I fully expect these devices to be hitting HTTPS endpoints in AWS (I really hope they are), which means that even if you set up a firewall rule to redirect the request to something local, that something would need to respond with a matching TLS certificate.</p> <p>You'll need to use something like Wireshark to capture a button push to double-check.</p>
|aws-iot|aws|amazon-iot-button|
Dash Button indicator Light
3900
<p>I have some Amazon Dash buttons that I want to use for multiple different scenarios, I don't know quite what yet, but I am unable to find a good way to make the indicator light turn green after I have pushed it.</p> <p>I am aware that Amazon has a more expensive version of the button that can do this, I am going to buy one to play with, but I want to get this working if possible.</p> <p>I found <a href="https://github.com/ide/dash-button/issues/59" rel="nofollow noreferrer">this</a> conversation and I looked at the links that are referenced and just like the comment says it is just not a very good way to do things.</p> <p>I have some c# running as a service on my computer and all it is doing, for now, is looking for MAC address then logging when that happened.</p> <p>I don't know much about web request stuff, but I am willing to really dig into it, as long as I don't have to do something like put custom firmware on my router.</p> <p>I am using PacketDotNet and SharpPcap in my c# solution.</p> <p>How do I return the 200 response, or anything else, to the Amazon Dash button to make the indicator turn green?</p> <p>Any help is greatly appreciated.</p>
2019-02-27T18:00:30.060
<p>Of course it will detect other sounds in the sense of noticing other sounds — that's what a sound sensor is for. Getting the trigger exact enough will probably be the most challenging part of the project — unless there is a library that detects the sound profile of claps.</p> <p>That shouldn't deter you though. It's just about refining the detection bit by bit. Just know that you'll have misfires in the detection in the beginning. A lot.</p> <p>The first Amazon Echos were quite bad at properly detecting the wake words. Nowadays they can detect it a few rooms away, while the tap is running, and the TV, and the Police drives by with sirens.</p>
|sensors|arduino|sound|
For my "switch on light with clap project " will sound sensors detect other sounds other than a clap and switch on the light?
3937
<p>I am just a high school student who is trying to teach myself through my IoT home automation projects. Please excuse me if this question sounds silly to you. This time I am working on a "clap to switch on the light" project. I learnt that to detect the sound of the clap, I can use a sound sensor. In my case, the sensor will be connected to an Arduino UNO, which will then be connected to a relay module that controls the lights. My question is: will the light switch on because of a loud sound other than a clap? Obviously it would be really annoying for the light to switch on and off when I don't want it to.</p>
2019-02-27T19:57:56.617
<p>In the case of all LoRaWAN messages except JoinAccept, the MIC must be accessible by interim network components, like a forwarding network server, which knows the NwkSKey and can verify message integrity and drop the message if the check fails.</p> <p>However, interim network elements should have no information about the integrity of a JoinAccept message. This is to increase the level of security.</p> <p>If someone knew the MIC of a JoinAccept, they could guess an AppKey/JoinNonce and check programmatically whether the guess is correct. If that attacker invests in a large computer farm and observes the network for a long time, they may find out what the AppKey is. That is not possible if the MIC is not accessible.</p> <p>You may say that the attacker can do the same for regular uplink messages. That is true, but please note that UL messages are signed and encrypted with session keys and not with the master key.</p>
|security|lorawan|standards|cryptography|
Why does LoRaWAN 1.1 use MAC-then-encrypt for Join-Accept messages?
3939
<p>In the LoRaWAN 1.1 standard it says on page 16: "For Join-Accept frame, the MIC field is encrypted with the payload and is not a separate field".</p> <p>In other scenarios the message integrity code is a separate field, so the message is encrypted before the MIC is generated, if I understood this correctly. <a href="https://crypto.stackexchange.com/questions/202/should-we-mac-then-encrypt-or-encrypt-then-mac">It is considered more secure to use Encrypt-then-MAC</a>.</p> <p>Do you have an idea why MAC-then-encrypt is used for this message type?</p>
2019-02-28T06:39:42.953
<p>You can send commands to a Google Home device on the same network to play an MP3 from a URL. With this you can have the Home device play arbitrary messages.</p> <p>There are libraries to do this, e.g. for Node.js there is <a href="https://www.npmjs.com/package/google-home-notify" rel="nofollow noreferrer">google-home-notify</a>, which takes a string, sends it to Google's Text to Speech API and then has the Google Home play the output.</p> <p>The <a href="https://github.com/nabbl/google-home-notifier/blob/master/examples/sendsimpletext.js" rel="nofollow noreferrer">example</a> code for this module is very simple:</p> <pre><code>// the packaged example uses require('../'); in your own project you would
// require the installed module instead
var googlehomenotifier = require('../')("192.168.178.131", "en-US", 1);

googlehomenotifier.notify("Some crazy textmessage", function (result) {
    console.log(result);
});
</code></pre> <p>Where <code>192.168.178.131</code> is the IP address of the Google Home device on your local network.</p> <p>There is also a Python <a href="https://github.com/harperreed/google-home-notifier-python" rel="nofollow noreferrer">version</a> that might plug into Domoticz more easily.</p>
|google-home|google-assistant|domoticz|
Is it possible to trigger Google Assistant to say things via 3rd party?
3941
<p>Basically I want to make my Google home device make an announcement when my mail arrives via Domoticz sensor.</p> <p>I know IFTTT can trigger Domoticz via web URL but is there a function to trigger Google Assistant in a similar fashion?</p>
2019-02-28T15:41:06.293
<p>Just for reference, it worked fine almost out of the box. I only needed to connect RX, TX, VCC and GND. Then I set up the config file as outlined and plugged it in. Every time I run the ESP32, all of the output is saved to a new file. It couldn't have been simpler.</p>
|esp32|
Can I log serial output from ESP32 using Sparkfun's OpenLog?
3943
<p>I am working on a project that largely revolves around the ESP32 WROVER. The device is going to be at a remote location for testing, but I'll need to be able to get the serial output. I think that the OpenLog might be a good solution here, but I'm not 100% positive. I've been reading this, and have come away with more questions (mostly because it is rather Arduino specific): <a href="https://learn.sparkfun.com/tutorials/openlog-hookup-guide" rel="nofollow noreferrer">https://learn.sparkfun.com/tutorials/openlog-hookup-guide</a></p> <p>My board has 4 pins broken out from the ESP32: RX, TX, VCC, GND. Will those be sufficient to write to the OpenLog? Or do I need to figure out how to get a wire for DTR and BLK (GND?)</p> <p>Do I need to somehow include the OpenLog library in my source for the ESP32 program? It is written with PlatformIO using C++, not Arduino. I am hoping that using a config.txt file I can just have it write all serial input it receives to a single file. Is that the case?</p> <p>Thanks for any guidance here!</p>
2019-02-28T21:06:12.100
<p>Finally, I was able to achieve my goal of being able to query Alexa for custom sensor readings without the need to program my own skill. </p> <hr> <p>The following <strong>elements</strong> are used in my setup: </p> <ol> <li>A <strong>Raspberry Pi</strong> in my home LAN with <strong>Node-RED</strong> installed on it</li> <li>Some source for a sensor signal (in my case a <strong>Nodemcu</strong> with a <strong>DHT22</strong> sensor that sends the humidity readings via MQTT to the Raspberry Pi, where the <strong>MQTT broker</strong> is running)</li> <li>An <strong>Amazon Echo</strong> (which does not need to be in the same LAN as the Raspberry Pi!)</li> <li>The <a href="https://alexa-node-red.bm.hardill.me.uk/" rel="nofollow noreferrer"><strong>Node-RED Alexa Home Skill Bridge</strong></a> node by @hardillb</li> <li>The <a href="https://github.com/thorsten-gehrig/alexa-remote-control" rel="nofollow noreferrer"><strong>Alexa-remote-control shell script</strong></a> that lets you <a href="https://iot.stackexchange.com/a/4022/9008">issue any text-to-speech command to your Alexa devices</a></li> </ol> <hr> <p>Here are the <strong>steps</strong> that one needs to take: </p> <ol> <li><strong>Register</strong> a new device in @hardillb’s <a href="https://alexa-node-red.bm.hardill.me.uk/" rel="nofollow noreferrer">Node-RED Alexa Home Skill Bridge</a>. Any device and name combination should do. I chose a smart plug and called it “Humidity_at_home”.</li> <li>Now let Alexa <strong>search</strong> for new devices.</li> <li>Create an <strong>Alexa routine</strong> in the Alexa app, where you use a custom voice trigger (in my case: “Alexa, what is the current humidity level”) to switch the virtual device “Humidity_at_home” on.</li> <li>In Node-RED <strong>configure an Alexa Home Skill Bridge node</strong> for the device “Humidity_at_home”. Depending on the command from Alexa (“Humidity_at_home” plug on/off) the <code>msg.command</code> element of the node output will have the value <code>TurnOnRequest</code>/<code>TurnOffRequest</code>.</li> <li><p>In Node-RED, when the “Humidity_at_home” node is triggered and outputs <code>msg.command = "TurnOnRequest"</code>, call the <strong>Alexa-remote-control shell script</strong> via the <strong>exec node</strong>, issuing a text-to-speech command to an Echo device, e.g. using this command: </p> <pre><code>alexa_remote_control.sh -d "Your Echo's name" -e speak:'Here is the text string you construct as answer to your Alexa request for the humidity level value' </code></pre></li> </ol> <hr> <p>Needless to say, you can use any kind of virtual device and any kind of setting of the device to trigger actions in Node-RED.</p>
|smart-home|mqtt|alexa|sensors|node-red|
Custom sensor readings with Amazon Alexa
3946
<p><strong>TLDR:</strong><br> I want to query Alexa for the current humidity sensor reading by asking: </p> <blockquote> <p>"What is the current humidity, Alexa?"</p> </blockquote> <p>The answer should be similar to: </p> <blockquote> <p>"The current humidity level is 56.7%"</p> </blockquote> <p><strong>Here is my setup:</strong></p> <ul> <li>I have a bunch of ESP8266s with a variety of sensors, which publish the sensor readings via MQTT over WLAN</li> <li>The MQTT broker is running on a Raspberry Pi in the same LAN</li> <li>I also have Node-RED running on the Raspberry Pi, where I am using the <a href="https://alexa-node-red.bm.hardill.me.uk/docs" rel="nofollow noreferrer">Node-RED Alexa Home Skill Bridge</a>, which works perfectly for sending commands to the different nodes in my network</li> </ul> <p><strong>What I am looking for:</strong></p> <ul> <li>I would like to be able to ask Alexa for the current sensor reading of any of my sensors and have her answer with the proper value</li> <li>In theory I understand that the Node-RED Alexa Home Skill Bridge should at least provide the opportunity to query the temperature reading of a device when you are in the US/UK.</li> <li>However, although I tried, this doesn't seem to work on my German Alexa</li> <li>Furthermore, I would of course also prefer to ask Alexa e.g. for the current humidity directly and not have to misuse the temperature query</li> </ul> <p><strong>Is there any way to achieve what I intend to do, e.g. by using an already existing skill or maybe a Node-RED extension?</strong></p>
2019-03-04T19:40:12.417
<p>After close to three months of communicating with Samsung support.... I finally received some steps to solve my issues, but I'll step through what they had me do.</p> <h2>Reset Your Hub</h2> <ol> <li>Unplug your hub from DC power</li> <li>Remove all backup batteries from your hub</li> <li>Wait about 15-30 mins for all capacitance to dissipate</li> <li>Plug your hub back into DC power</li> <li>Wait for the LED light to turn solid green</li> <li>Try to add your device(s)</li> </ol> <h2>Reset Your Device</h2> <p>There are a number of guides that Samsung provides for each device on how to properly reset your device. The latest SmartThings app provides a link that takes you directly to the appropriate page when it appears to be taking too long to connect.</p> <p>For the most part, it consists of either pressing and holding an available connection button for 5 seconds until the LED begins blinking green or blue. An alternative, typically for older models, is to remove all power from the device (battery, plug, etc.) press and hold the connection button while re-introducing power to the device. Continue holding the connection button until the LED begins flashing red or yellow at which point you should release the button.</p> <h2>Verify Your Radio</h2> <ol> <li>Log into your <a href="https://graph-na02-useast1.api.smartthings.com/" rel="nofollow noreferrer">SmartThings Groovy IDE</a> using either your SmartThings or Samsung account (whatever you log into your app with).</li> <li>Go to <strong>My Hubs</strong></li> <li><p>Look at the row for <em>ZigBee</em>.</p> <p>a. Verify the <em>State: Functional</em></p> <p>b. Verify the <em>OTA: enabled for all devices</em></p></li> </ol> <p>For me, <em>OTA</em> was disabled, if this is the case scroll down to <em>Utilities</em> and click on <em>View Utilities</em> and follow the next few steps:</p> <ol> <li>Look at the row for <em>ZigBee Utilities</em></li> <li>If you have no utilities, perform the <strong>Soft Reset on Your Hub</strong></li> <li>Click on the <em>Allow OTA</em> tool</li> <li>Go back to <em>My Hubs</em></li> <li>Verify the <em>OTA</em> setting is now set to enabled for all devices*</li> </ol> <h2>Soft Reset on Your Hub</h2> <p>When verifying the configuration of my ZigBee radio, I did not have any utility options for my device. So, I was instructed to perform what they called a Soft Reset of my hub.</p> <ol> <li>Unplug all power (DC and battery) from the hub</li> <li>Using a small tool or paperclip, press and hold the reset button in the back</li> <li>While continuing to hold the reset button, plug the DC power back into the hub.</li> <li>When the LED begins flashing yellow, release the reset button. For me, the light almost immediately began flashing as soon as I plugged in the DC power. Also, the flashing wasn't a normal flash, it looked like a glitchy "my battery is about to die kind of flash". For those that remember how the LED indicator light would flash on the Gameboy, that's what it's like.</li> <li>The hub will go through a series of updates, transitioning the LED light between blue, magenta (or pink), and off.</li> <li>When the hub is ready, it will return to a solid green.</li> <li>After the hub is ready, return to the online portal and run through the steps for verifying and setting the OTA configuration to <em>enabled for all devices</em>.</li> </ol>
|zigbee|samsung-smartthings|
SmartThings Zigbee Devices won't connect
3955
<p>I upgraded my v1 SmartThings Hub to the v2 SmartThings Hub. According to some online forums, I made sure to properly reset and remove all of my devices from my previous hub. Next, I installed my new hub (v2) and removed my old hub (v1) via the online portal (Groovy IDE or something...). Finally I began adding my devices, most were from the previous hub, a few were brand new that I bought with the v2 hub. </p> <p>None of the "Samsung SmartThings" devices connected. I've realized that all of the devices that have successfully connected to the v2 hub are Z-Wave, so I can assume that there is an issue with the Zigbee devices or the Zigbee radio in the v2 hub. Since half of them connected on the old hub, I'm assuming it's with the hub's radio.</p>
2019-03-08T11:15:43.230
<p>Yes, the ZigBee standard defines the protocol for the OTA transfer, and also the format of the OTA files. So long as manufacturers implement the standard, it is possible to use any hub to load new firmware into any device.</p> <p>You can find these definitions in the ZigBee Cluster Library documentation.</p>
|zigbee|
Can a Zigbee device receive an OTA update through a different manufacturers hub?
3966
<p>One of the supposed advantages of Zigbee application profiles such as Home Automation is that devices from different manufacturers can work together, so in theory you can use one generic hub (e.g. SmartThings) instead of several proprietary ones.</p> <p>But what happens when a device wants to check for updated firmware over the internet? Does Zigbee or the application profile have some kind of standard for this?</p>
2019-03-16T21:09:24.707
<p>This sounds like you need to use strain gauges and build your own.</p> <p>While Bluetooth-enabled strain gauges do exist, they are used as power meters for bicycles by measuring the amount of bend in the cranks. But the cranks only bend imperceptible amounts.</p> <p>The other possible option would be a piezoelectric material on the surface of the piece, but again you'd have to build your own circuit to interpret the voltage change and calibrate it.</p> <p>There isn't enough information about the setting to say if an accelerometer/gyroscope approach would work. We don't know if the entire artefact is fixed (e.g. bolted down) or can move without bending the strip.</p>
|bluetooth|
Measure strip bending
3987
<p>I thought of a Bluetooth strip device that would tell me the value of the angle and direction in 3D. I tried to search for one, but did not find anything. The only idea I've got so far is to set up a few gyroscopes in a straight line and constantly measure their positions and thus calculate the bending value. Is there a device that can do that?</p>
2019-03-17T15:14:53.113
<p>Jsotola is right: cars use the distance to the key to know when they are allowed to lock the doors.</p> <p>In your case, you can use an extra sensor on the inside. For example, an RFID or NFC tag, an ultrasonic sensor, or a laser or infrared sensor. With this element, you can detect whether someone is on the inside.</p>
|wireless|door-sensor|smart-lock|
How to differentiate the side of a door?
3988
<p>I would like to design a lock which will automatically open once someone is on a specific side of a door ("outside"). </p> <p>There are easy (software) and complicated (hardware) elements to take into account, but one of the points which I do not know how to approach is the side detection.</p> <p><strong>How is this typically done?</strong> </p> <p>An example is my car, which will let the door lock only if the key is outside the car. If it is next to the car outside, the car will lock. A few centimeters in (the key touches the door from the inside) and it is not possible to lock the door anymore.</p>
2019-03-22T10:08:22.507
<p>I had a look into the source code of some projects and it looks like all multi-octet fields that are encrypted are left big-endian. The rest are little-endian and are even used as little-endian during the calculation of e.g. the MIC, or as a nonce in encryption. During MIC calculation, the internal 4-byte counter is used as the FCnt value, but still as little-endian.</p> <p>I made a table for this: <a href="https://i.stack.imgur.com/O8Gjp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O8Gjp.png" alt="table of LoRaWAN 1.1 Byte-order for fields in Data Up messages"></a></p>
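<p>As a small, hypothetical Python illustration of the table in the image above (the values are made up; this is not code from the specification):</p> <pre><code># Made-up values; only the byte ordering matters here.
dev_addr = 0x11223344   # value as written in the spec (most significant byte first)
fcnt = 1                # 32-bit frame counter kept by the device

# Multi-octet fields go over the air little endian:
print(dev_addr.to_bytes(4, 'little').hex())        # 44332211
print((fcnt % 65536).to_bytes(2, 'little').hex())  # 0100, only 16 bits are transmitted

# In the block used for the MIC, the counter is the full 4-byte
# little-endian value, even though only 2 bytes went over the air:
print(fcnt.to_bytes(4, 'little').hex())            # 01000000
</code></pre>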
|lora|lorawan|standards|
How does endianness work in LoRaWAN1.1?
4001
<p>On Page 9 in the <a href="https://lora-alliance.org/resource-hub/lorawantm-specification-v11" rel="nofollow noreferrer">LoRaWAN 1.1. specification</a> it says:</p> <blockquote> <p>The over-the-air octet order for all multi-octet fields is little endian.</p> <p>EUI are 8 bytes multi-octet fields and are transmitted as little endian.</p> </blockquote> <p>How are multi-octet fields defined in the standard?</p> <p>Are these fields seen on the highest level of detail on page 16 (e.g. for payload-messages: MHDR, DevAddr, FCtrl, FCnt, FOpts, FPort, FRMPayload, MIC)?</p> <p>And what is meant by &quot;over-the-air&quot;?</p> <p>On page 26 the MIC calculation is described as calculating the CMAC of the message and some extra information and then only taking the first four bytes of it. Does this mean, the CMAC is seen as big endian first, then the MSB are &quot;turned around&quot; or is it more like the CMAC is already interpreted as little endian and the LSB is inserted and sent without changing it?</p>
2019-03-23T01:10:45.620
<p>A roundabout answer.</p> <p>As per the info at the bottom of this answer, you get charged by the number of messages written, not by the number of times someone reads the data.</p> <p>That means you can experiment with multiple GET requests without having to worry about cost, so run your script and see what happens.</p> <p>NOTE: make sure that your script does not write any messages.</p> <p>Here is an excerpt from <a href="https://thingspeak.com/pages/license_faq" rel="nofollow noreferrer">https://thingspeak.com/pages/license_faq</a></p> <blockquote> <p><strong>4. What is a message?</strong></p> <p>ThingSpeak stores messages in channels. A message is defined as a write of up to 8 fields of data to a ThingSpeak channel. For example, a channel representing a weather station could include the following 8 fields of data: temperature, humidity, barometric pressure, wind speed, wind direction, rainfall, battery level, and light level. Each message cannot exceed 3000 bytes. Examples of messages include:</p> <ol> <li>A write to a ThingSpeak channel using the REST API or target-specific ThingSpeak libraries<br /> a. From an embedded device<br /> b. From another computer</li> <li>A write to a ThingSpeak channel using MQTT</li> <li>A write to a ThingSpeak channel from MATLAB using thingspeakwrite or the REST API</li> <li>A write to a ThingSpeak channel inside ThingSpeak using the MATLAB Analysis or MATLAB Visualizations Apps</li> <li>Any writes to ThingSpeak triggered by a React or a Timecontrol</li> </ol> </blockquote> <p>and</p> <blockquote> <p><strong>19. Does using any of the apps in ThingSpeak™ affect my messages in any way?</strong></p> <p>Your messages are consumed when you write data to a ThingSpeak channel. If you write data to a channel from one of the ThingSpeak Apps, you will consume messages. For example, if are using the MATLAB Analysis app to compute a value that is derived from data you have stored in ThingSpeak channels, you will not consume messages, but if you save/write that value to another channel, you will consume messages.</p> </blockquote>
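<p>If you want to test it yourself, a minimal read-only sketch could look like the following (Python with the <code>requests</code> package; the channel ID and read API key are placeholders, and the endpoint format should be double-checked against the current ThingSpeak docs):</p> <pre><code># Read-only sketch; reads should not consume messages, but verify on your account.
import time
import requests

CHANNEL_ID = 123456              # placeholder, use your own channel
READ_API_KEY = 'YOUR_READ_KEY'   # only needed for private channels

url = 'https://api.thingspeak.com/channels/{}/feeds.json'.format(CHANNEL_ID)

for _ in range(8):  # roughly what 8 parallel workers would generate per second
    r = requests.get(url, params={'api_key': READ_API_KEY, 'results': 10})
    feeds = r.json().get('feeds', [])
    print(r.status_code, len(feeds))
    time.sleep(1)
</code></pre>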
|rest-api|
Thingspeak API Limit GET Requests?
4004
<p>Is there a GET request rate limit for the Thingspeak API? So far I've only been able to find information about POST requests. </p> <p>I have a data acquisition R script that I would like to run in parallel. I'm pausing ~1-second between each request currently, but when parallelized, I would be making 8 requests every 1-second instead.</p> <p>Thanks!</p>
2019-03-25T09:53:02.653
<p>No, this is not possible with an MQTT broker.</p> <p>There are 2 situations where a broker will cache a message on a given topic.</p> <ol> <li><p>When a message is published with the retained bit set. This message will be stored and delivered, before any new messages, to any client that subscribes to a matching topic. There is a single retained message per topic, and a new message with the retained bit will replace any current one.</p> </li> <li><p>If a client has a persistent subscription and is offline, the broker will queue all QoS 1 and 2 messages sent while it is offline and deliver them when it reconnects (unless it sets the cleanSession bit to true in its connection packet).</p> </li> </ol> <p>The only way to achieve what you describe is to have another client batch up the messages and publish the collection on a different topic.</p> <p>EDIT:</p> <p>The MQTTv5 spec supports setting a <a href="https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901112" rel="nofollow noreferrer">Message Expiry Interval</a> value on a message.</p> <blockquote> <p>If present, the Four Byte value is the lifetime of the Application Message in seconds. If the Message Expiry Interval has passed and the Server has not managed to start onward delivery to a matching subscriber, then it MUST delete the copy of the message for that subscriber [MQTT-3.3.2-5].</p> <p>If absent, the Application Message does not expire.</p> <p>The PUBLISH packet sent to a Client by the Server MUST contain a Message Expiry Interval set to the received value minus the time that the Application Message has been waiting in the Server [MQTT-3.3.2-6]. Refer to section 4.1 for details and limitations of stored state.</p> </blockquote>
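<p>To illustrate the "extra client that batches" approach, here is a rough Python sketch using the Eclipse Paho client. The topic names and batch size are placeholders; in your setup Telegraf would then subscribe to the batched topic instead of the raw one:</p> <pre><code># Rough sketch of a batching client with paho-mqtt; topics and size are placeholders.
import json
import paho.mqtt.client as mqtt

BATCH_SIZE = 10
buffer = []

def on_message(client, userdata, msg):
    buffer.append(msg.payload.decode())
    if len(buffer) == BATCH_SIZE:
        # republish the collected datapoints as a single message
        client.publish('sensors/batched', json.dumps(buffer))
        buffer.clear()

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost', 1883)
client.subscribe('sensors/raw/#')
client.loop_forever()
</code></pre>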
|mqtt|publish-subscriber|
Are MQTT Brokers able to retain/cache some data for a certain amount of time and then send to the subscribers?
4010
<p>I currently have a Mosquitto MQTT broker on which some IoT nodes publish their information on a specific topic. I have an instance of <code>Telegraf</code> from InfluxData running that subscribes to this topic and stores the information into <code>InfluxDB</code>.</p> <p><strong>Requirements</strong>:</p> <p>I understand that information, once published on a topic, will be sent out immediately to the subscriber by the broker. I am looking for a retention or caching mechanism in the MQTT broker which can wait until X datapoints have been published on the topic and then send them to the subscriber.</p> <p>Is there such a mechanism in standard MQTT brokers, or does this go beyond the MQTT standard?</p>
2019-03-26T17:54:58.957
<p>The short answer to this is you don't with any of the standard libraries (especially not Paho or the old Mosquitto Python wrapper).</p> <p>While MQTT doesn't require TCP, it is best suited to being implemented on top of it, and trying to use it over a serial port routed to a LoRa radio will not be simple. It would require removing all the socket-level code and replacing it with LoRa-specific code and a LoRa addressing scheme to identify clients and the broker.</p> <p>I suggest you look at the following 2 things that may suit your needs.</p> <p>Firstly, look at the MQTT-SN <a href="http://www.mqtt.org/new/wp-content/uploads/2009/06/MQTT-SN_spec_v1.2.pdf" rel="nofollow noreferrer">spec</a>; this is an even lighter-weight protocol that is better suited to serial-like communication. </p> <p>Secondly, what might be easier is to look at how the <a href="https://www.thethingsnetwork.org/" rel="nofollow noreferrer">Things Network</a> works. This uses MQTT to pass messages to the correct LoRa gateway, which then delivers the message to the correct client.</p>
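<p>To give a feel for that second approach, here is a very rough Python sketch of the gateway side only: it reads whatever the LoRa radio delivers on the serial port and republishes it over ordinary TCP/IP MQTT. The serial settings, topic and broker address are assumptions, and the RN2483 command handling is left out entirely:</p> <pre><code># Very rough gateway-side sketch; serial settings, broker and topic are assumptions.
import serial                    # pyserial
import paho.mqtt.client as mqtt

ser = serial.Serial('/dev/ttyAMA0', 57600, timeout=5)  # RN2483 modules usually default to 57600 baud
client = mqtt.Client()
client.connect('broker.example.org', 1883)
client.loop_start()

while True:
    line = ser.readline().decode(errors='ignore').strip()
    if line:
        # forward the raw radio payload; a real gateway would parse and address it
        client.publish('lora/uplink', line)
</code></pre>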
|mqtt|raspberry-pi|lora|
Communicate with MQTT over LoRa
4016
<p>I have two Raspberry Pis with a LoRa module each (Microchip RN2483, connected over serial). How do I tell MQTT (in Python) to use the LoRa motes (/dev/ttyAMC0) instead of Ethernet or Wi-Fi?</p>
2019-04-04T07:53:50.873
<p>According to <a href="https://rads.stackoverflow.com/amzn/click/com/B079Q5W22B" rel="nofollow noreferrer" rel="nofollow noreferrer">this Amazon page</a>, the Teckin SP20 supports it. </p> <p>Also according to that Amazon page, the item does not have a UL listing. They claim to be <em>applying</em> for an ETL listing, which would be equivalent. In the meantime</p> <blockquote> <p>It is total safe,you do not have to worry at all. </p> </blockquote>
|smart-plugs|
Measuring electricity usage with TECKIN smart plug
4030
<p>I bought a smart plug, which doesn't seem to support measuring electricity usage with its native app. </p> <p>I am wondering what the chance is, that this is a pure software feature and if I would be able to measure electricity usage by simply using a different app?</p> <p>If yes, what app could I try?</p>
2019-04-04T16:03:03.037
<p>First of all, trying to launch the integration from an SSH session is a bad idea (it is missing environment variables among other things).</p> <hr> <p>With the old version of the integration (before 2.0.11), we somehow needed certificates with an associated policy for the things in the AWS IoT core. This bug is fixed in 2.0.11 (which is the current one at the time of writing this answer).</p>
|mqtt|aws-iot|lora|lorawan|
The Things Network AWS IoT integration
4037
<p>I have followed the tutorial for the AWS integration but cannot get the MQTT messages to display in the AWS IoT Core. I assume bridging is not working since I can get the messages in an external client from the TTN MQTT broker.</p> <p>I have followed the troubleshooting steps but there is nothing at all in the logs (neither in <code>app-1.log</code> nor <code>app-1.error.log</code>). I then used ssh to get into the EC2 instance and tried to manually launch the integration with</p> <pre class="lang-sh prettyprint-override"><code>/var/app/current/bin/integration-aws run </code></pre> <p>It then crashed with the following error</p> <pre><code> INFO Initializing AWS IoT client PolicyName=ttn-integration Region= FATAL Failed to get AWS IoT endpoint </code></pre> <p>When setting the <code>AWS_REGION=eu-west-1</code> environment variable, I get the following error:</p> <pre><code> INFO Initializing AWS IoT client PolicyName=ttn-integration Region=eu-west-1 INFO Found AWS IoT endpoint Endpoint=an3cfmjmy6od4.iot.eu-west-1.amazonaws.com INFO Created certificate ID=189326ec8e90c8abd0de2eef1fe6eb4a22c05495c49bcfe291860a6b45243acd FATAL Failed to attach policy to certificate Certificate=189326ec8e90c8abd0de2eef1fe6eb4a22c05495c49bcfe291860a6b45243acd Policy=ttn-integration error=ResourceNotFoundException: Policy not found status code: 404, request id: ff8df0e8-56f1-11e9-9368-e107e49fac68 </code></pre> <p>I assume there is some sort of configuration file missing but I cannot seem to find anything that mentions such a file.</p> <ul> <li>Does such a configuration file exist?</li> <li>Is there anything else causing this behavior?</li> </ul>
2019-04-04T16:48:01.723
<p>The approach I found quite quick and useful was this:</p> <ol> <li><p>Connect the IP cam directly to the PC's LAN port</p></li> <li><p>Change the adapter settings (via Control Panel) to <code>192.168.1.X</code></p></li> <li><p>Log off</p></li> <li><p>Connect using the web page at the default IP <code>192.168.1.10</code> and change it to the desired IP.</p></li> </ol> <p>This YouTube <a href="https://www.youtube.com/watch?v=FtFqQ9zpzq0" rel="nofollow noreferrer">tutorial</a> helped a lot!</p>
|digital-cameras|surveillance-cameras|ip-address|
Change IP address of IP CAM using CMS software
4040
<p>I'm using an IP camera as an input for my IoT project. The IP cam's default IP is <code>192.168.1.10</code>, and in order to use it in my network I need to change its default IP.</p> <p>My network address is <code>192.168.3.x</code>. The seller's instructions <a href="https://i.stack.imgur.com/RHjn3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RHjn3.jpg" alt="instructions"></a> are clear and reasonable, but my network address is not the same as his, so <code>IP SEARCH</code> does not find the device.</p> <p>I know that it is connected as needed since my NVR detects the camera, with the exact IP, but changing the IP cannot be done using the NVR GUI...</p> <p>Any ideas? </p> <p>Guy </p>
2019-04-09T21:55:01.143
<p>There are products like the <a href="https://sonoff.itead.cc/en/" rel="nofollow noreferrer">Sonoff</a> range that when flashed with the <a href="https://github.com/arendst/Sonoff-Tasmota" rel="nofollow noreferrer">Tasmota</a> firmware can emulate a Hue Bridge which might work, but this <a href="https://github.com/arendst/Sonoff-Tasmota/issues/3876" rel="nofollow noreferrer">issue</a> implies native HomeKit support is unlikely.</p> <p>But once you have the controllable device other tools like Home Assistant might allow you to add HomeKit support.</p>
|philips-hue|apple-homekit|
Connect dumb lamp to Hue/Homekit without switch
4052
<p>I would like to connect a dumb lamp to my Hue/Homekit setup.</p> <p>The lamp is powered by G9 LEDs and, as far as I know, there are no G9 smart bulbs. I am also unable to exchange the physical wall switches.</p> <p>I tried finding a smart component that could be injected between ceiling outlet and lamp, but couldn't find anything.</p> <p>Does such a component exist?</p>
2019-04-10T08:35:15.043
<p>I have <a href="https://softwarerecs.stackexchange.com/questions/55589/firmware-over-the-air-fota">a question about FOTA</a> which got no reply. So I researched &amp; posted my own answer, so that you don't have to reinvent the wheel.</p> <p>You can either use <a href="https://rauc.io/" rel="nofollow noreferrer">RAUC</a>, which looks to be so good that it might be overkill, or you could work your way through (the most recommended) <a href="https://github.com/search?o=desc&amp;q=fota&amp;s=stars&amp;type=Repositories" rel="nofollow noreferrer">FOTA projects on GitHub</a>.</p> <p>If you don't choose one, there is enough FOSS there that you can read the documentation &amp; code to get a feel for how others do it, and establish your own guidelines.</p> <p>Please, if you find anything better, post it here, to help others. In fact, whatever you choose, please post it here. Thanks</p>
|over-the-air-updates|
Error Handling Fallbacks In IoT Software
4054
<p>When one designs software for remote devices and IoT, one has to consider how the system manages various failures, be it software or hardware. </p> <p>If the system recognizes a SW bug, it may notify the cloud and revert to a boot loader. If the system recognizes a HW peripheral issue, it may stop using it and notify the cloud. If the system happens into a fault where it must question its own sanity - let's say, when the NVM is unreliable, it may require a complete shutdown.</p> <p>This is a very big and important issue, on which the rest of the SW should be built.</p> <p>I believe this issue to be common enough for guidelines, tutorials and literature to be written about, so we don't have to reinvent it on our own in each of our individual projects.</p> <p>I would like to know if there is recommended literature, tutorials or guidelines for designing remote device software for robustness, especially regarding image updates.</p> <p>Edit: The focus here is not on error detection, but on how to design a sandbox, in which errors and faults can be treated safely in an IoT device environment.</p>
2019-04-10T17:05:59.847
<p><strong>TL;DR</strong>: Innr Zigbee light works with Home Assistant.</p> <p><strong>The long version</strong>: As with any other Zigbee light, there are two ways to connect it with Home Assistant:</p> <ul> <li>Pair the innr light with a Zigbee hub that is supported by HA. For example, innr works fine with the hue bridge which <a href="https://www.home-assistant.io/integrations/hue/" rel="nofollow noreferrer">integrate well with HA</a>.</li> <li>Connect an external Zigbee antenna with the raspberry pi, use the <a href="https://www.home-assistant.io/integrations/zha" rel="nofollow noreferrer">ZHA integration</a> and pair the innr light (and other Zigbee products as well) directly with HA, no external hub is required.</li> </ul> <p>When using the second option, make sure to purchase a <a href="https://www.home-assistant.io/integrations/zha#known-working-zigbee-radio-modules" rel="nofollow noreferrer">supported antenna</a>.</p> <p>On my setting, I had a hue bridge gen 1 which I ditched after Philips *decided to reduce its functionality. I'm using <a href="https://elelabs.com/products/elelabs-zigbee-shield.html" rel="nofollow noreferrer">Elelabs antenna</a> connected to the raspberry pi. The installation was not straightforward but is documented very clearly. All my Zigbee products work very well with it (hue, innr, tradfri, sonoff).</p> <p>Note that innr bulbs, at least the model I have, do not support the HA light integration option for transition.</p> <p>* <em><a href="https://www.youtube.com/watch?v=oCP26Tb2a4A&amp;t=115s" rel="nofollow noreferrer">Paul Hibbert about Philips decision to axe gen 1 (video)</a></em></p>
|zigbee|philips-hue|home-assistant|
If I want to integrate Innr with home assistant, does there need to be a component for it?
4056
<p>So I bought some "Innr" products, namely zigbee bulbs and plugs. For my hub, I decided to go the raspberry pi + hass.io route.</p> <p>After I bought everything however, I noticed, that there is actually no component on <a href="https://www.home-assistant.io/components/#search/" rel="nofollow noreferrer">https://www.home-assistant.io/components/#search/</a> for Innr specifically.</p> <p>Does this mean, that Innr product won't work with home assistant?</p> <p>Innr is supposedly compatible with the hue bridge. Home Assistant has a component for the hue bridge.</p> <p>Would it work if I use the hue component for that?</p> <p>I kind of thought, that most zigbee products should work with my zigbee equipped raspberry pi, even if there isn't a specific component. Is this assumption true?</p>
2019-04-11T19:18:20.290
<p>Of course, it may vary from sensor to sensor, but the Xiaomi Aqara advertised <a href="https://www.xiaomitoday.com/xiaomi-aqara-smart-water-sensor-now-available-for-8-99/" rel="nofollow noreferrer">here</a> is rated at IP67. If you look up the IP ratings and <a href="https://www.trustedreviews.com/opinion/what-is-ip68-ip-ratings-explained-2947135" rel="nofollow noreferrer">what they mean</a>, you find that the first digit refers to its imperviousness to solids, and the second refers to its resistance to water.</p> <p>A rating of 6 on solids means that it has:</p> <blockquote> <p>Protection from contact with harmful dust.</p> </blockquote> <p>A rating of 7 on liquids means that it is:</p> <blockquote> <p>Protected from immersion in water with a depth of up to 1 meter (or 3.3 feet) for up to 30 mins.</p> </blockquote> <p>Given that you're not submerging it anything like a meter, it should be fine at whatever angle you put it at - especially if you arrange it such that you can remove it from the bathtub once you get there to turn the water off. Leaving it sitting in the water for extended periods of time might not be the best, as it is only verified up to 30 minutes.</p> <p>Hope this helps!</p>
|sensors|zigbee|xiaomi-mi|
Are immersion sensors typically waterproof?
4062
<p>This may sound dumb, but are flood sensors, like the one from Xiaomi typically completely waterproof?</p> <p>And if they are, would they work if installed vertically, facing with the side downwards, instead of bottom downwards?</p> <p>What I have in mind is installing one of those in my bath tub and have it notify me, when my bath tub is full.</p> <p>Or are there actually better solutions for this?</p>
2019-04-25T14:15:57.943
<p>It seems a single Device Provisioning Service (DPS) instance is enough to handle this scenario. All I have to consider is the attestation mechanism, in order to identify the devices to DPS. An intermediate certificate of the tenant can be uploaded to the DPS enrollment list, and the same certificate will be used for signing the device certificates which belong to that tenant. </p> <p>So even if the application of each tenant is different, I need to have the common DPS properties encoded in the device during the manufacturing/factory setup process, so that once an internet connection is available it can start its first communication with DPS to get the specific IoT Hub information, which will be available in the tenant's enrollment list.</p>
|security|azure|provisioning|
Should there be one device provisioning service for one IoT hub if it is associated with one tenant?
4094
<p>I have a multi-tenancy situation where I have created one IoT hub per tenant.</p> <p>Now, if I have 5 tenants and I create 5 IoT hubs, should I also create 5 device provisioning services for those 5 IoT hubs (one for each)? That way, when I onboard devices on a large scale I can programmatically add the provisioning configuration in the app (I will be creating 5 apps for 5 tenants because each tenant's requirements will be different).</p>
2019-04-29T09:45:43.997
<p>I was able to solve it. The issue was resolved by following these steps:</p> <ol> <li>I created a new device with type <code>ti-sensortag2</code> from the IBM Watson IoT dashboard, instead of with type <code>iotsample-ti-cc2650</code>.</li> <li>I passed <code>d:5j6cf4:ti-sensortag2:546c0e5301e1</code> as the device ID.</li> <li>The last thing that I rectified was to include my organization ID in the broker ID which I passed in the app. So this came out to be my new broker ID: <code>tcp://5j6cf4.messaging.internetofthings.ibmcloud.com</code></li> </ol> <p>The new credentials for Cloud Setup in the SensorTag app are shown in this screenshot:</p> <p><a href="https://i.stack.imgur.com/zSTdO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zSTdO.jpg" alt="enter image description here"></a></p> <p>After that I clicked the <strong>push to cloud</strong> toggle button and it started sending data to my IoT service:</p> <p><a href="https://i.stack.imgur.com/4zYgO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4zYgO.png" alt="enter image description here"></a></p> <p>and I was able to receive data in recent events like this:</p> <p><a href="https://i.stack.imgur.com/3qR6L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3qR6L.png" alt="enter image description here"></a></p>
|mqtt|sensors|
Cannot connect SensorTag cc2650 to IBM watson IoT platform using their android app
4099
<p>I have a SensorTag cc2650 from Texas Instruments with their Android app installed on my phone. I am getting an exception while connecting to IBM Watson IoT. It worked fine with the quick-start service but gives me an exception when I connect it to my registered service with the platform.</p> <p><strong>Working fine with quick-start:</strong> </p> <p><a href="https://i.stack.imgur.com/mVEl4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mVEl4.png" alt="enter image description here"></a></p> <p>The credentials that I have added in the mobile app are:</p> <p><a href="https://i.stack.imgur.com/DM56A.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DM56A.jpg" alt="enter image description here"></a></p> <p><strong>The exception that I receive:</strong> </p> <p><a href="https://i.stack.imgur.com/LGY7U.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LGY7U.jpg" alt="enter image description here"></a></p> <p>On the IBM Watson IoT platform I have created a device; here it is:</p> <p><a href="https://i.stack.imgur.com/VaF9m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VaF9m.png" alt="enter image description here"></a></p> <p>And this is a screenshot of the IBM IoT Platform dashboard:</p> <p><a href="https://i.stack.imgur.com/ioW9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ioW9L.png" alt="enter image description here"></a></p> <p>What is the issue? I read <a href="https://developer.ibm.com/recipes/tutorials/connect-a-cc2650-sensortag-to-the-iot-foundations-quickstart/" rel="nofollow noreferrer">this</a> recipe. There was a procedure with screenshots for iOS, but for Android it was written that the author would update the Android screenshots soon, and he hasn't updated them yet.</p> <p>I also set TLS optional security as mentioned in <a href="https://developer.ibm.com/answers/questions/368104/unable-to-connect-ti-sensor-tag-to-iotf-registered/" rel="nofollow noreferrer">this</a> post but the issue still persists.</p> <p><a href="https://i.stack.imgur.com/ayMg6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ayMg6.png" alt="enter image description here"></a></p>
2019-04-29T13:39:31.763
<p>Google finally added this to Assistant routines (configured in the Google Home app)! It's now a default function.</p> <p>If you need a full guide with pictures, I'll link some below. But simply put, all you need to do is this: when you are making a routine, add a new action; at the bottom of the list of actions there is an &quot;Add delay&quot; button. Just select this and set the time you want your delay to last. Then add the action you want to happen after the delay. You can repeat this to add as many actions and delays as you would like (there is likely some limit).</p> <p>Here are some detailed guides:</p> <ul> <li><a href="https://9to5google.com/2022/03/31/how-to-set-up-delayed-actions-in-google-homes-assistant-routines/#:%7E:text=a%20Voice%20command.-,Setting%20up%20delays,on%20the%20counter%2C%20tap%20Done." rel="nofollow noreferrer">9to5google: How to set up delayed actions in Google Home’s Assistant Routines</a></li> <li><a href="https://www.howtogeek.com/794085/how-to-add-a-delay-to-google-assistant-routines/" rel="nofollow noreferrer">How-To Geek</a></li> </ul>
|smart-home|google-home|smart-assistants|
Add delays to Google Home routines
4101
<p>I have a new Google Home Mini and I'm looking to add a lot of customization to it and the devices connected to it. The built in &quot;routines&quot; seems quite limited, the main thing I want to do is add delays into the routines but this is not a built-in function.</p> <p>I'm looking for a tool that would allow me to do add more customization especially delays. I'm a software developer so I'm open to programming or messing with config files but the simplest solution/tool would be best. As far as I can tell IFTTT does not have the tools to do what I want with my products.</p> <p>Here is an example of what I would like to do. I know how to do all of this except the delays.</p> <p>Example routines:</p> <ul> <li>turn light to yellow</li> <li>turn light to 20% brightness</li> <li>set volume to 10%</li> <li>play podcast</li> <li><strong>wait 10 min</strong></li> <li>set light to 15% brightness</li> <li>set light to orange</li> <li><strong>wait 10 min</strong></li> <li>set light to 10% brightness</li> <li>set light to red</li> <li><strong>wait 10 min</strong></li> <li>set light to 5% brightness</li> <li>stop podcast</li> <li>start white noise</li> </ul> <hr /> <p>My system and tools at my disposal:</p> <ul> <li>Google Home Mini.</li> <li>Merkury Smart Wi-Fi LED Bulb Color.</li> <li>Android phone.</li> <li>Windows 10 desktop.</li> </ul>
2019-04-29T17:31:20.557
<p><strong>TL;DR:</strong></p> <p>One way to get what you want is to use <strong>WiFi smart plugs</strong> featuring power monitoring, which can be integrated into <strong>Node-Red</strong>. Node-Red can be run for example on a <strong>Raspberry Pi</strong>. From there you can also store the measured values in a database.</p> <hr> <p><strong>Long answer:</strong></p> <p>I would suggest you look into smart plugs, which you can integrate into your LAN via WiFi. All of those plugs allow you to control them via WiFi and some also feature energy monitoring (at a higher cost for the plug, naturally). </p> <p>Typically, those smart plugs can be controlled via the manufacturer's app, which will also show you statistics about the energy usage, when the plug has that feature. </p> <p>However, often you can either update them with custom firmware (e.g. Sonoff products, which feature an easily reprogrammable ESP8266 chip) or they have APIs that allow you to control them without the need to use any cloud service via the manufacturer's app. </p> <p>One example are <a href="https://github.com/plasticrake/tplink-smarthome-api" rel="nofollow noreferrer">TP-Link smart plugs</a> that I use in my home controlled via <a href="https://flows.nodered.org/node/node-red-contrib-tplink-smarthome" rel="nofollow noreferrer">node-red</a>. Another example are <a href="https://flows.nodered.org/node/node-red-contrib-tuya-smart" rel="nofollow noreferrer">tuya-devices</a>, which are sold under a variety of brand names.</p>
|power-consumption|
Monitor Energy Usage
4102
<p>I would like to monitor energy usage of various appliances in my house. I hope to put a plug adaptor in which records the power usage. These are available commercially very cheaply, but often only have internal recording features. e.g <a href="https://www.amazon.co.uk/Energenie-429-856UK-Power-Meter/dp/B003ELLGDC/ref=asc_df_B003ELLGDC/?tag=googshopuk-21&amp;linkCode=df0&amp;hvadid=232000808334&amp;hvpos=1o4&amp;hvnetw=g&amp;hvrand=13314856035766492460&amp;hvpone=&amp;hvptwo=&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9047004&amp;hvtargid=aud-545671390501:pla-421167192880&amp;psc=1&amp;th=1&amp;psc=1" rel="nofollow noreferrer">this one</a> and the data is not recoverable. </p> <p>Preferably the adaptors would post the information to a central database as it is recorded "live" . But I am also happy for them to be recorded to a SD card or similar, which I can then analyse. </p> <p>Any advice or suggestions are appreciated? </p>
2019-04-30T12:04:04.493
<p>You can use the <a href="https://www.ubeac.io/?utm_source=iot.stackexchange.com&amp;utm_medium=referral&amp;utm_campaign=4108" rel="nofollow noreferrer">uBeac IoT platform</a>. </p> <p>You should create a gateway and it will give you a unique URL (which you can change later). Then, set the given URL in your device. </p> <p>You can configure the security options as below:</p> <ul> <li>HTTP/HTTPS with or without an additional security header </li> <li>MQTT with or without credentials</li> </ul> <p>For debugging purposes, you can send data without any security settings.</p>
|mqtt|open-source|
connect a device automatically without configuring at iot platform end
4108
<p>My main goal is to connect devices using only an access token, and not by manually adding devices to the cloud IoT platform.<br> Is there a platform that can do that automatically? How do I do it, and which platform is good for that purpose?</p> <p>Why I asked this question: I have used Node-RED, thinger.io, and ThingSpeak. However, in most of them you have to add the device manually and create a device auth token. Is it possible to add a device automatically, subscribe to a topic (the topic name is the same one), and get data without manually adding devices? Basically, I need an automatic device registry, either based on tokens or using a username and password login.</p>
2019-04-30T16:01:35.477
<p>The answer is more or less the same as for this question: <a href="https://iot.stackexchange.com/questions/4094/should-there-be-one-device-provisioning-service-for-one-iot-hub-if-it-is-associa">Should there be one device provisioning service for one IoT hub if it is associated with one tenant?</a></p> <p>Yes, DPS isolates tenant configuration based on the attestation mechanism. Let's say an X509 certificate based attestation mechanism is used and there are 5 tenants. There will be a single root CA for all the tenants and a unique intermediate certificate per tenant, signed by this root CA. All the devices belonging to a specific tenant will then use that tenant's intermediate certificate for signing. This configuration should be created in the DPS instance with one enrollment list per tenant.</p> <p>In short, when a device communicates with DPS it will identify itself via the intermediate certificate uploaded to DPS initially, and based on this DPS will make sure that the device belongs to the particular tenant and allow it to connect to the specific IoT Hubs configured in the list.</p>
|security|azure|authentication|provisioning|
How enrollment list in Azure IoT Hub Device Provisioning Service isolate tenant specific configuration?
4110
<p>Some documentation and videos regarding the Device Provisioning Service (DPS) say it can handle multitenancy, but it seems there is some confusion about how one tenant's configuration/data is isolated from other tenants'. Let's say I have 5 tenants, each of them having 1000 devices which I need to onboard to the IoT hub of their respective tenant (assume I have one IoT hub per tenant). An enrollment group is a perfect thing to use in this situation, but then do I have to create 5 enrollment lists (one per tenant) and configure all the devices and their attestation mechanism in the list? If this is the right way, does the "attestation mechanism" make a difference in isolating the registration of each tenant's devices?</p> <p>Some of the documentation also says tenant isolation is based on the ID scope of DPS, which would mean I need to create 5 DPS instances (one per tenant) and provide this ID scope and registration URL in the registration software during the manufacturing process? If that is the case, wouldn't it be a mess to handle encoding the ID scope for each tenant's device at the manufacturing step?</p>
2019-04-30T19:57:11.377
<p>Don't use HTTP; it is the wrong choice for this sort of thing.</p> <p>Use a messaging-based protocol (e.g. <a href="https://mqtt.org" rel="nofollow noreferrer">MQTT</a>); that way updates are pushed to the device rather than having it poll for them. This cuts down on bandwidth and you get (near) real-time notification.</p> <p>The next question is where to run a message broker. Shared hosting (e.g. LAMP stacks) doesn't normally allow you to run brokers, but for something small, moving the install to something like AWS Lightsail will probably be cheaper anyway (but you will be responsible for setup/maintenance/security).</p>
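<p>For comparison, the push pattern looks roughly like this. This is a Python/paho sketch with placeholder broker and topics; on the NodeMCU itself you would use an equivalent client library (for example PubSubClient for Arduino or umqtt for MicroPython), but the flow is the same:</p> <pre><code># Push-based sketch with paho-mqtt; broker address and topics are placeholders.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # runs only when the user actually changes the state; no polling loop
    desired = msg.payload.decode()
    print('set GPIO5 to', desired)       # a real device would switch the pin here
    client.publish('device1/status', desired, retain=True)  # report back once

client = mqtt.Client()
client.on_message = on_message
client.connect('broker.example.org', 1883)
client.subscribe('device1/command')
client.loop_forever()
</code></pre>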
|esp8266|
NodeMCU (ESP8266) board controlled over shared hosting database
4111
<p>I have a NodeMCU (ESP8266) board that I want to control over the internet. I am trying to find a solution where I don't have to set up any configurations on my router like port forwarding. I came up with the following solution:</p> <p><a href="https://i.stack.imgur.com/zgJjs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zgJjs.png" alt="enter image description here"></a></p> <p>I have a website where the user changes the device status (with status I mean for example GPIO5 pin value HIGH or LOW) which is then saved to a database on a shared hosting server. The NodeMCU sends periodically (for example every 5 seconds) a HTTP GET request to the database. According to the value that is received from the database the NodeMCU board changes the pin value to HIGH or LOW. If NodeMCU changes it's status (for example a pin value from HIGH to LOW) the new device status is sent to the database with a HTTP POST request. The device also sends a HTTP POST request periodically (for example every 60 seconds) so the user can monitor the device status on the website. </p> <p>There are a few problems with this configuration:</p> <ol> <li><p>There is no real-time connection between NodeMCU and the user (there is always a delay in the device response)</p></li> <li><p>The device sends thousand of queries every day that are a load to the shared hosting server. For example if the GET request is sent every 5 seconds, that gives 17280 queries per day for one device.</p></li> </ol> <p>So my question is how practical is this configuration on shared hosting or any kind of hosting, what are the alternatives or improvements to this configuration and how to establish a connection with the NodeMCU so that the device sends a GET request to the database only when the device status is changed in the database by the user.</p>
2019-05-03T14:15:58.713
<p>You can ping a known address and see if it is reachable. You can use <a href="https://github.com/marian-craciunescu/ESP32Ping" rel="nofollow noreferrer">this library</a> to do that.</p>
|arduino|esp32|
How to test internet connectivity of network to which esp32 is connected?
4120
<p>I want to know whether the network to which the ESP is connected has internet connectivity or not. How can I send a ping request using the ESP32?</p>
2019-05-03T17:59:32.783
<p>I'm going to ignore your first question for the reasons mentioned in the comment.</p> <p>As for the second: MQTT messages are published to topics (not channels), and nearly all MQTT brokers allow you to configure Access Control Lists (ACLs) that allow you to control which topics each user can publish and subscribe to.</p> <p>With a correctly set up ACL the client will only be able to publish data to a specific topic and, if needed, subscribe to a specific topic used to send commands to that user's device.</p> <p>Most brokers also allow substitutions in the ACLs so you can set up template entries that match any user, e.g. for Mosquitto you can use %u to match the username and %c to match the client ID, so a topic pattern might look like this:</p> <p><code>write %u/data/#</code><br> <code>read %u/command/#</code></p> <p>This would let a user only publish to topics which start with their username followed by <code>/data/...</code> and subscribe to topics with <code>[username]/command/...</code> at the start.</p> <p>As a pub/sub protocol MQTT is sometimes considered a broadcast medium, e.g. one publisher to many subscribers, but there is nothing to say you can't use topics that only the central controller publishes to and only one device subscribes to in order to get 1-to-1 messaging. In the new MQTT v5 spec there is even the concept of reply messages, so you can specifically do request/response-type messaging.</p>
|mqtt|security|
IoT <-> Cloud communication method
4121
<p>I am struggling with the creation of multiple IoT devices (NanoPi-like) which will be controlled via a cloud server. I wish to use MQTT as I see this is the best method for a small footprint. I know MQTT has certificates, passwords etc., but I have no clue how to implement secure one-to-one communication between device and cloud. I might create one channel per device, but I'm afraid anyone who gets physical access to the device will be able to join any other channel. I could create channels manually and assign a user+pass for each of them, but doing so for multiple devices would be a waste of time. So I'm stuck on two aspects:</p> <ol> <li>Best implementation for the Device &lt;-&gt; Cloud data protocol. Will MQTT over TLS be fine?</li> <li>How can I automatically restrict devices to use only one channel? For example, each device will register itself in the cloud with its own ID and will use this ID as its channel. But how do I prevent a rogue attacker from joining any other channels?</li> </ol>
2019-05-04T11:56:33.150
<p><a href="https://www.home-assistant.io/docs/ecosystem/backup/backup_github/" rel="nofollow noreferrer">This official guide</a> suggests that only the <code>config</code> directory should be version controlled and also a specific <code>.gitignore</code>. This, applied to a fresh install of Home Assistant 0.105.3, leaves five files:</p> <ul> <li><code>groups.yaml</code>, <code>scenes.yaml</code> and <code>scripts.yaml</code>, all empty</li> <li><code>automations.yaml</code>, containing only <code>[]</code></li> <li><code>configuration.yaml</code>, with the following contents:</li> </ul> <pre><code># Configure a default setup of Home Assistant (frontend, api, etc)
default_config:

# Uncomment this if you are using SSL/TLS, running in Docker container, etc.
# http:
#   base_url: example.duckdns.org:8123

# Text to speech
tts:
  - platform: google_translate

group: !include groups.yaml
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
</code></pre>
|home-assistant|
Where can I find the default configuration of Home Assistant?
4125
<p>I decided I should start version controlling my Home Assistant configuration, and want to commit the unedited config first to see what changes I have already made. Can I find a complete, unedited copy of the config folder (and other relevant files, if any) anywhere?</p>
2019-05-09T12:03:31.910
<p>It looks like a pretty standard Bluetooth Low Energy (BLE) device, so assuming the manufacturer hasn't done something strange (like Fitbit has), you should be able to use any language that has BLE GATT support to connect to the device and then subscribe to the characteristics for each of the different data fields. For Node.js there is <a href="https://www.npmjs.com/package/noble" rel="nofollow noreferrer">noble</a>, which is pretty good for building this sort of thing.</p> <p>There are mobile apps like <a href="https://play.google.com/store/apps/details?id=no.nordicsemi.android.mcp&amp;hl=en_GB" rel="nofollow noreferrer">nRF Connect</a> that will let you interrogate the device and determine the UUIDs for the service and characteristics, which will help get you started.</p>
|bluetooth|data-transfer|streaming|smart-watches|
Stream real time data from smart band/watch to computer via Bluetooth
4133
<p><strong>Description</strong>: I have a smart wrist band (<a href="https://rads.stackoverflow.com/amzn/click/com/B07FC49PSV" rel="nofollow noreferrer">link to Amazon</a>) which has Bluetooth connectivity. My goal is to read, in real time on my computer (running Ubuntu 18.04), some of the data that is tracked by the wearable device, such as the HR or the number of steps. In other words, every time the desired variable is recorded by the smart band it should also be displayed on my PC monitor.</p> <p><strong>Question</strong>: Unfortunately I am a beginner in this topic and I have no clue how to do it or if it is even possible. As a consequence, I would like to ask if you are able to provide some links to possible solutions where I can get some inspiration. It would be nice if the suggested solutions involved some open source.</p>
2019-05-15T12:53:34.140
<p>AWS Greengrass is designed to do IoT processing at the edge, rather than (or in addition to) sending it to the cloud.</p> <p>So, if you want to do some processing of your data at the edge, you can use any IoT edge platform that fulfills your requirements. If you choose Greengrass, Amazon documents the platforms it runs on; we have only tested it under Intel Linux.</p> <p>Otherwise, if you just want to forward the data to the cloud, then you just need "gateway" functionality that packages the data your device is generating into the format your cloud platform wants. That is usually MUCH less effort than integrating with an edge platform.</p>
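<p>To illustrate the plain "gateway" option, a rough sketch of the forwarding part might look like the following: read lines from the serial port and republish them over MQTT. The serial device path, broker address and topic are placeholders, and a real AWS IoT endpoint would additionally require the device certificates to be configured for TLS (e.g. paho's <code>tls_set()</code>).</p> <pre><code># pip install pyserial paho-mqtt
import json

import serial                       # pyserial
import paho.mqtt.client as mqtt

SERIAL_PORT = '/dev/ttyUSB0'        # placeholder: your CAN/serial adapter
BROKER = 'broker.example.com'       # placeholder: your cloud endpoint
TOPIC = 'vehicle/can'               # placeholder topic

client = mqtt.Client('serial-gateway')
# For AWS IoT you would also configure the device certificates here, e.g.
# client.tls_set(ca_certs=..., certfile=..., keyfile=...) and use port 8883.
client.connect(BROKER, 1883)
client.loop_start()

with serial.Serial(SERIAL_PORT, 115200, timeout=1) as port:
    while True:
        line = port.readline().decode(errors='ignore').strip()
        if not line:
            continue
        # Package the raw reading into whatever format the cloud side expects.
        client.publish(TOPIC, json.dumps({'raw': line}))
</code></pre>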
|aws-iot|aws|aws-greengrass|
Can I run a Greengrass Core and an IoT Device on the same machine?
4152
<p>We are trying to create a system that reads and performs some computation on data coming in via a serial port (from a CAN network) and sends the results to the cloud. I have been looking into AWS Greengrass and am wondering if it would be possible to create a device that does the processing and sends the results to the Core, AND a Core that forwards the results to the cloud, on the same machine (e.g. a Raspberry Pi)?</p>
2019-05-20T15:44:14.860
<p>This largely depends on the definitions you choose. The criteria for what makes a device an "IoT device" are <a href="https://iot.stackexchange.com/q/99/12">contentious at best</a>. Most definitions would call a Google Home an IoT device if you carefully read through the definitions given, but not everyone agrees on the same definition, so it becomes rather nebulous.</p> <p>We can also look at how a <a href="https://en.m.wikipedia.org/wiki/Wireless_sensor_network" rel="nofollow noreferrer">wireless sensor network</a> is defined, particularly:</p> <blockquote> <p>The base stations are one or more components of the WSN with much more computational, energy and communication resources. They act as a gateway between sensor nodes and the end user as they typically forward data from the WSN on to a server.</p> </blockquote> <p>I would tend to say that a Google Home doesn't fit this definition well - a crucial aspect is that other sensors connect directly to the base station, like a hub. While a hub like SmartThings certainly does this, a Google Home generally doesn't directly link to devices and instead sends messages via remote servers.</p> <p>Ultimately you will need to consult the definitions you want to use and check - but remember that definitions aren't always agreed upon!</p>
|networking|sensors|amazon-echo|google-home|
Is Google home an IoT Device or a WSN base station?
4165
<p>They say <strong>Google Home</strong> is an <strong>IoT device</strong>, but it has the ability to control other devices like lights, air conditioners, etc. (in smart homes), which is more like a <strong>base station</strong> in a <strong>wireless sensor network</strong>. Or is Google Home an <strong>integration</strong> of both technologies together? I think IoT devices are connected to the internet directly. But in the case of smart homes, why are they using a Google Home or Echo, etc.?</p>
2019-05-28T10:10:09.197
<p>For simplicity's sake, you are correct. However, those "messaging" protocols are typically only relevant at the IP layer, which (again for the sake of simplicity) is usually understood as the WAN endpoint. When implementing BLE, LoRaWAN or ZigBee you would typically use the read/write/notify/indicate operations (or equivalent) defined by the standard. The processing overhead of implementing MQTT over BLE would remove much of the benefit of BLE (I won't go into MQTT-SN). Typically you would transmit local data natively and use a less energy-efficient base station (gateway) to reformat the data (JSON/CSV/etc.) before publishing/POSTing it.</p> <p>There are so many possible implementations that it's near impossible to set a gold standard; it's about understanding the tools well enough to pick the best combination for the job.</p>
|networking|communication|protocols|
Difference between Network, Communication, and Messaging protocol
4182
<p>My understanding is that network protocols are BLE, Wi-Fi, ZigBee, etc. and messaging protocols are HTTP, MQTT, etc. So my questions are:</p> <ol> <li>Is my understanding so far correct?</li> <li>Are "network protocol" and "communication protocol" the same and used interchangeably, or do they mean something else?</li> </ol>
2019-05-29T14:59:03.370
<p>I learned that the best way is to turn the RPi into a Z-Wave gateway using a Z-Wave controller (a USB device that can communicate with Z-Wave devices). It is important to note that a Z-Wave device can be connected to only one network at a time, so in my case I need to disconnect all devices from my current network and connect them to the new one.</p> <p>One of the good open source choices to turn your RPi into a Z-Wave gateway is Domoticz (<a href="http://www.domoticz.com/" rel="nofollow noreferrer">http://www.domoticz.com/</a>). Once this is set up, you can connect devices and configure your own scenes. Domoticz supports other protocols, not only Z-Wave, so it is a good way to integrate many devices with one home automation system.</p>
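<p>For the "lights off when the phone leaves the Wi-Fi" automation, Domoticz exposes an HTTP/JSON API that a small script on the Pi can call. The sketch below is only illustrative: the Domoticz address, the device <code>idx</code> and the phone's IP are placeholder values you would replace with your own.</p> <pre><code># Rough sketch: turn a Domoticz switch off when the phone stops answering pings.
import subprocess
import time
import urllib.request

DOMOTICZ = 'http://192.168.1.10:8080'   # placeholder: your Domoticz address
LIGHT_IDX = 5                           # placeholder: idx of the switch in Domoticz
PHONE_IP = '192.168.1.50'               # placeholder: (static) IP of your phone

def phone_present():
    # One ping with a 1 second timeout; return code 0 means the phone answered.
    return subprocess.call(['ping', '-c', '1', '-W', '1', PHONE_IP],
                           stdout=subprocess.DEVNULL) == 0

def switch_light(cmd):                  # cmd is 'On' or 'Off'
    url = (DOMOTICZ + '/json.htm?type=command&amp;param=switchlight'
           '&amp;idx=' + str(LIGHT_IDX) + '&amp;switchcmd=' + cmd)
    urllib.request.urlopen(url, timeout=5)

was_present = True
while True:
    present = phone_present()
    if was_present and not present:     # phone just left the Wi-Fi
        switch_light('Off')
    was_present = present
    time.sleep(30)
</code></pre>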
|mqtt|raspberry-pi|zwave|
How can I use Raspberry Pi to communicate with my Z-Wave gateway?
4194
<p>I am quite new to Z-Wave. I have a gateway, 2 Z-Wave sockets and 3 light switches. Everything comes from the Keemple brand, if that helps. However, using Keemple I cannot automate things the way I would want. I am a programmer, so writing scripts in any language is not a problem for me. I was thinking about making use of my Raspberry Pi. I would like my lights to automatically turn off when my phone disconnects from the Wi-Fi network.</p> <p>I am thinking of 3 possible scenarios:</p> <ol> <li><p>The RPi becomes a Z-Wave gateway and I pair all devices with it. The only problem is that all devices are currently paired with the Keemple gateway. Is it possible for one device to be connected to more than one gateway at a time? Once all Z-Wave devices are connected to the RPi and the gateway, I could communicate directly from the RPi and do some scripting there.</p></li> <li><p>The RPi becomes a Z-Wave device (sensor). It would have boolean values that represent the states I am interested in. Those states would contain values that come from the scripts I would write. I would need to pair it somehow (not sure if possible) with my Keemple gateway and set up a scene for switching off all lights when the phone becomes unreachable.</p></li> <li><p>The RPi communicates with the gateway over TCP/IP. However, I am not sure if this is possible. I learned about MQTT, but I cannot figure out a way to connect to my gateway over that protocol... possibly it is only available for the Keemple cloud and not for me to use.</p></li> </ol> <p>Are any of these scenarios possible? Which way should I go to achieve what I need?</p>
2019-06-11T08:52:31.887
<p>I have posted this question on the Nordic DevZone and got this answer:</p> <blockquote> <p><a href="https://devzone.nordicsemi.com/f/nordic-q-a/48518/softdevice-handler-h-is-missing" rel="nofollow noreferrer">https://devzone.nordicsemi.com/f/nordic-q-a/48518/softdevice-handler-h-is-missing</a> The same question has been answered before, so I am posting the other answer as well, as it has a longer discussion: <a href="https://devzone.nordicsemi.com/f/nordic-q-a/48025/not-finding-softdevice_handler-h-in-nrf52_sdk_15-0-3" rel="nofollow noreferrer">https://devzone.nordicsemi.com/f/nordic-q-a/48025/not-finding-softdevice_handler-h-in-nrf52_sdk_15-0-3</a></p> </blockquote>
|bluetooth-low-energy|
How to set up the Timeslot API for AES-CCM peripheral use
4221
<p>I am using a Nordic BLE SoC (nRF52 DK), along with the S130 SoftDevice and SDK 14.2.0. I want to secure an advertised BLE packet with AES-CCM encryption. The board contains a co-processor for AES calculation that is not accessible while using the SoftDevice I am using to generate packets and advertise.</p> <p>The solution is to use the Timeslot API, but I do not know how to do it. There is <a href="https://devzone.nordicsemi.com/nordic/short-range-guides/b/software-development-kit/posts/setting-up-the-timeslot-api" rel="nofollow noreferrer">a tutorial</a> but it's only valid for nRF5 SDK 11.</p>
2019-06-22T13:28:33.677
<p>You don't need to open any ports to connect to an external broker from a normal NAT'd internal network (e.g. a typical domestic ADSL setup).</p> <p>As long as your network allows all outbound connections (and the related replies), it should all just work.</p> <p>This is because all MQTT connections are initiated by the client and are then persistent until the client closes the connection. Messages for subscriptions just flow back down this existing connection.</p> <p>If you need to explicitly allow outbound ports, the default port is 1883 (8883 for TLS).</p> <p>If you are on a more locked-down network, e.g. a corporate network that requires you to use a proxy to reach the outside world, then you have 2 choices:</p> <ol> <li>Use an OSI layer 5 proxy, e.g. SOCKS.</li> <li>If you only have access to an HTTP proxy, you have to hope that your external broker and client support MQTT over WebSockets.</li> </ol>
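<p>To illustrate, a minimal client inside the NAT'd network only ever makes an outbound connection; the broker hostname and topic below are placeholders, and the commented line shows the WebSocket variant for networks where only proxy-friendly ports are allowed out.</p> <pre><code># pip install paho-mqtt
import paho.mqtt.client as mqtt

BROKER = 'broker.example.com'    # placeholder: your cloud broker

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client('inside-nat-client')
# WebSocket variant for restrictive networks (the port depends on the broker):
# client = mqtt.Client('inside-nat-client', transport='websockets')

client.on_message = on_message
client.connect(BROKER, 1883)        # outbound connection only, nothing opened inbound
client.subscribe('home/+/status')   # example topic
client.loop_forever()               # messages arrive back down the same connection
</code></pre>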
|mqtt|
Connect to MQTT Server without opening port
4251
<p>I have an MQTT client in an internal network and an MQTT server somewhere in the cloud. How can I connect to the MQTT server without opening a port in the client's network?</p>
2019-06-26T19:38:25.217
<p>One way to get some isolation between your wired networks is for each of you to have your own dedicated router. The WAN port of each of your routers is wired to a LAN port of the ISP's router.</p> <p>To the ISP router, there are only two clients: Router You and Router Brother. Each of the You and Brother routers is configured as normal for a router. They have their DHCP servers enabled. They perform DNS caching. They have their Wi-Fi on with your choice of SSIDs and credentials, different for each of you. When a request from your side is made, it is NAT'ed out of your router and into the common ISP router. There, it is NAT'ed again and presented to the wild Internet. Same for your brother.</p> <p>There is no internetworking between you and your brother. You just share an ISP connection.</p> <p>Although anything is probably hackable, short of exploiting a bug in the router, neither of you can probe the other's network.</p>
|routers|chromecast|
Router connected to a router: how to cast different things in each router?
4260
<p>My brother is my neighbor and we are saving some money by sharing internet, but I don't want to share anything else, just the internet connection.</p> <p>What I did is pass a cable through the wall that connects his router (the one that has access to the internet) to my router. I created my own Wi-Fi name and password.</p> <p>Today my brother-in-law was listening to online radio and I got a notification on my Android phone telling me that someone was casting to a wireless sound system.</p> <p>I checked my router by entering its IP in the browser URL box and saw that someone "unknown" was connected to my network.</p> <p>I would like anything connected to my Wi-Fi to stay in a different network, not able to see what my brother is doing (and vice versa).</p> <p>How should I start?</p>
2019-06-28T18:48:49.397
<p>This is a very good question; I'd like to offer my point of view on the matter.</p> <p>Arm has designed their processors with the embedded world as the target, so they thought about everything with this target in mind:</p> <ul> <li>size</li> <li>energy consumption</li> <li>instructions</li> <li>ease of use</li> <li>scalability</li> </ul> <p>I mostly work with Linux, and when you're developing a product with Arm it's way easier than with x86. Everything is in place and ready to save you time. First, there is a huge community, and you'll find plenty of resources to help you when you're stuck. There is also the fact that it's the industry standard, so you won't struggle with anything too exotic when working with Arm: you'll have all the drivers and any kind of eval boards, SoCs and SoMs that you would need. Add on top of all that that almost all embedded engineers know their way around Arm, so if you want to push for another architecture you'll really have to have a good technical reason.</p> <p>Companies that use other architectures do so mostly because of legacy from former products and because of the company's engineering knowledge.</p> <p>To sum up, I think that Arm is the easiest choice when developing a new product, but you can also have good reasons to use other architectures (legacy, or very specific needs for the product that are available only in a specific architecture).</p>
|hardware|standards|arm|
Why does the ARM architecture dominate the IoT market?
4273
<p>I would like to understand why we use ARM for routers, cell phones, cameras, refrigerators, smart TVs, and everything, instead of using any other architecture like x86.</p> <p>What are the advantages of using ARM for these things? What would be the problems of simply using x86? Is it all about cost, size and energy?</p>
2019-07-02T15:27:01.690
<p>Given the answers in the comments, using git over SSH is probably the best option.</p> <p>Firstly, git will only pull the differences between the current head hash and the head on the remote. Given you are pulling updates to Python scripts, these should just be text files, so the diffs should be small.</p> <p>I say pulling over SSH because you can enable compression on the whole link in the client (and server) settings, which helps to reduce the traffic size.</p> <p>SSH also means you can install an SSH key on each machine that is locked down to only allow read-only access to the git repo, which should help to limit the security exposure.</p> <p>Rsync also only transfers the differences between files by default and can also run over SSH, so it can use the built-in compression as well. But rsync is going to need to send file size/date/checksum info for each file, which is likely to consume more network resources than just checking whether the git head hash has changed when there hasn't been an update.</p>
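<p>As a rough sketch of how cheap the "has anything changed?" check can be, you can compare the remote head hash (via <code>git ls-remote</code>, a few hundred bytes on the wire) with the local one and only pull when they differ. The branch and remote names below are assumptions.</p> <pre><code># Sketch: pull only when the remote branch has actually moved.
import subprocess

REMOTE = 'origin'
BRANCH = 'master'     # assumed branch name

def git(*args):
    return subprocess.check_output(['git', *args], text=True).strip()

local = git('rev-parse', 'HEAD')
# ls-remote only exchanges a small amount of data, so it is cheap to run daily.
remote = git('ls-remote', REMOTE, 'refs/heads/' + BRANCH).split()[0]

if local != remote:
    # ssh compression applies if 'Compression yes' is set in ~/.ssh/config.
    git('pull', '--ff-only', REMOTE, BRANCH)
else:
    print('Already up to date; nothing transferred.')
</code></pre>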
|raspberry-pi|data-transfer|mobile-data|over-the-air-updates|
remotely update a raspberry pi with limited data
4283
<p>What is the proper, and hence best, way to push updates to a Raspberry Pi with a limited mobile data cap (1 MB/day)?</p> <p>My first plan was to set up a Git repository and pull files from it, say, once a week. The problem with that is that it's not frequent enough, and it still pulls the whole repository even when no change has been made, which might eat up data.</p> <p>I am thinking about something more precise where I can see which files changed and download only those specific files.</p> <p>Manually SSHing into each device is not an option as there will be around a hundred of them.</p> <p>Any suggestions?</p> <p>Thanks.</p>
2019-07-03T05:59:38.033
<p>There is also the <a href="https://luftdaten.info/en/home-en/" rel="nofollow noreferrer">luftdaten.info</a> project, which is an open-source particle sensor with its own firmware. They do a similar thing to what you proposed, only without the config button. They do so by starting the web server by default when the device is powered on. After a certain threshold (I think it's somewhere between 3-10 minutes), the internal web server is shut down, so no more configuration is possible until the next power cycle.</p> <p>This solution might be too insecure for certain scenarios, but you might want to know about it nevertheless.</p> <p><strong>Edit:</strong></p> <p>To get the initial configuration into the device, the following procedure is in place:</p> <p>When the device boots, it tries to reach the configured WLAN (there is no configured WLAN on the first startup). If it fails to connect to the pre-configured WLAN, it sets a static IP and opens its own wireless network without a password, which one can connect to in order to do the initial configuration via the static IP address.</p>
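<p>For a feel of what that flow looks like in code, here is a minimal sketch in MicroPython (which also runs on the ESP8266): try the stored credentials first, otherwise open a configuration access point. The setup SSID is made up, and the stored-credentials handling and the config web page are left out; on the Arduino side the WiFiManager library implements the same pattern.</p> <pre><code># MicroPython sketch for an ESP8266: try the saved Wi-Fi, else open a config AP.
import time
import network

def connect_or_configure(ssid, password):
    sta = network.WLAN(network.STA_IF)
    sta.active(True)
    if ssid:
        sta.connect(ssid, password)
        for _ in range(20):               # wait up to ~10 s
            if sta.isconnected():
                return sta                # normal operation: go and talk MQTT
            time.sleep(0.5)
    # No (working) credentials: open an unprotected AP at a known address.
    sta.active(False)
    ap = network.WLAN(network.AP_IF)
    ap.active(True)
    ap.config(essid='my-device-setup')    # made-up setup SSID
    # A small web server on 192.168.4.1 would now collect SSID, password and
    # MQTT broker details, save them, and reboot into normal operation.
    return ap
</code></pre>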
|wifi|esp8266|arduino|
What is a good way for an IoT device to receive its network settings?
4284
<p>I'm in the process of building an IoT device using the ESP8266. The device will eventually contain a couple of motors, and I would like to control these motors using MQTT. I would like to make the device as cheap as possible, so I would like to avoid things like displays and keyboards.</p> <p>So, when the device is turned on, it is supposed to connect to the local Wi-Fi, and then to an MQTT broker. But how does it know about the local SSID to connect to, and what about the username and password? Since the device has no display or keyboard, there is no way to input these things. And how does the user know if the device was able to connect or not? For troubleshooting, it would be nice if the device at least had some way to indicate what the problem might be.</p> <p>The solution I have thought of is to have one button and one LED on the device. The button would be marked "config" or similar. When the button is pressed, the device will start operating as a Wi-Fi access point with a predefined SSID. It will have a web server, so the user can connect with a laptop or phone to this predefined access point and enter the local network settings (SSID, username and password) as well as the address of the MQTT broker. The LED will be used to indicate the mode of operation, and also as an error indication. For example, when the LED is glowing steadily everything is connected, long flashes mean it is in config mode, short flashes mean there is an error, or something similar.</p> <p>My question is, is the solution I have proposed a standard way of doing things when it comes to this problem? I.e. will it feel like a familiar flow to the user, or would some other way be better? After doing a little bit of searching I have found e.g. <a href="https://support.lifx.com/hc/en-us/articles/204538340-LIFX-Bulb-Setup" rel="noreferrer">this</a>, which seems to be a similar user flow. I would still be interested in hearing what experience you have with this, and how you would solve it.</p> <p>If it turns out that this is a good and useful way to solve this problem, and since it is a kind of generic solution, it would be nice to not reinvent the wheel too much. I'm thinking that there could be a library that could do all of this. The library would be configured with the input pin for the button and the output pin for the LED, and then take care of the rest. It could be built upon the <a href="https://github.com/knolleary/pubsubclient" rel="noreferrer">PubSubClient</a> library and based on the tutorial <a href="https://techtutorialsx.com/2017/04/09/esp8266-connecting-to-mqtt-broker/" rel="noreferrer">ESP8266: Connecting to MQTT broker</a>. So is there a library that does this or something similar? If not I'll take a stab at creating my own, but would like to hear about what's out there first.</p> <p>Thanks!</p>
2019-07-03T15:30:04.737
<p><strong>Option 1: idf.py</strong><br /> If you have an <strong>esp-idf</strong> project you can simply run this command. You must be in your project folder, e.g. <em>'c:\my-esp32-projects\sample-project'</em>:</p> <pre><code>idf.py partition-table
</code></pre> <p>and the partition table will be printed in the console like this:</p> <pre><code>Partition table binary generated. Contents:
*******************************************************************************
# ESP-IDF Partition Table
# Name, Type, SubType, Offset, Size, Flags
nvs,data,nvs,0x9000,24K,
phy_init,data,phy,0xf000,4K,
factory,app,factory,0x10000,1M,
*******************************************************************************
</code></pre> <p><strong>Option 2: Espressif-IDE</strong><br /> Use the Espressif-IDE, right-click on your project and choose <em>ESP-IDF: Partition Table Editor</em>. This way you get a graphical window where you can view/edit the partition table for your ESP32 application. <a href="https://i.stack.imgur.com/uPl6a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uPl6a.png" alt="Espressif-IDE partition table editor" /></a></p>
|esp8266|esp32|flash-memory|
How can I list the partition table of a currently running esp32 devboard?
4287
<p>I am developing on an ESP32 devboard (WROOM module). I need to get the partition table <em>of the currently running device</em>.</p> <p>This document contains quite good documentation of the partition table. I can also read/write flash regions with the <code>esptool.py</code> and <code>parttool.py</code> tools. These can manage and modify the table well.</p> <p>However, I did not find a way to read the partitions of the device itself. How can I do it?</p>
2019-07-04T14:59:05.730
<p>I found a different solution to this in Python.</p> <p>client.py</p> <pre><code>import threading
import time

import paho.mqtt.client as mqtt


class Messages(threading.Thread):
    def __init__(
        self,
        clientname,
        broker=&quot;127.0.0.1&quot;,
        pub_topic=&quot;msg&quot;,
        sub_topic=[(&quot;msg&quot;, 0)],
    ):
        super().__init__()
        self.broker = broker
        self.sub_topic = sub_topic
        self.pub_topic = pub_topic
        self.clientname = clientname
        self.client = mqtt.Client(self.clientname)
        self.client.on_connect = self.on_connect
        self.client.on_message = self.on_message
        self.client.on_subscribe = self.on_subscribe
        self.received = {}
        self.topicTemp = &quot;&quot;  # last topic handled, used to spot a change of topic

    def on_connect(self, client, userdata, flags, rc):
        if rc == 0:
            print(&quot;Server Connection Established&quot;)
        else:
            print(&quot;bad connection Returned code=&quot;, rc)
        self.client.subscribe(self.sub_topic)

    def on_subscribe(self, client, userdata, mid, granted_qos):
        print(&quot;Subscription complete&quot;)

    def on_message(self, client, userdata, msg):
        # Only store the message when its topic differs from the one handled
        # last; one dictionary entry is kept per topic.
        if self.topicTemp != msg.topic:
            self.received[msg.topic] = {
                &quot;topic&quot;: msg.topic,
                &quot;payload&quot;: str(msg.payload.decode()),
            }
            self.topicTemp = msg.topic  # remember the topic just handled

    def begin(self):
        print(&quot;Setting up connection&quot;)
        self.client.connect(self.broker)
        self.client.loop_start()

    def end(self):
        time.sleep(1)
        print(&quot;Ending Connection&quot;)
        self.client.loop_stop()
        self.client.disconnect()

    def send(self, msg, topic=None):
        if topic is None:
            topic = self.pub_topic
        self.client.publish(topic, msg)

    def get(self, topic=None):
        # Subscribe to an extra topic at runtime.
        if topic is None:
            topic = self.sub_topic[0][0]
        self.client.subscribe(topic)


def main():
    remote = Messages(clientname=&quot;PC&quot;, broker=&quot;127.0.0.1&quot;)
    remote.begin()


if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>function.py</p> <pre><code>import time

from mqtt_client_test import Messages

remote = Messages(
    clientname=&quot;Camera&quot;,
    broker=&quot;127.0.0.1&quot;,
    pub_topic=&quot;pc/camera&quot;,
    sub_topic=[(&quot;pc/camera&quot;, 0), (&quot;commands/detect&quot;, 1)],
)
remote.begin()

while True:
    msg = remote.received
    if &quot;commands/detect&quot; in msg:
        print(&quot;command&quot;, msg[&quot;commands/detect&quot;][&quot;payload&quot;])
    if &quot;pc/camera&quot; in msg:
        print(&quot;camera&quot;, msg[&quot;pc/camera&quot;][&quot;payload&quot;])
    time.sleep(1)
</code></pre> <p>Now you can receive payloads from different topics in one <code>on_message</code> function and extract them wherever you want to use them (note that <code>function.py</code> imports <code>client.py</code> under the module name <code>mqtt_client_test</code>, so save it under that name or adjust the import).</p>
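<p>For completeness, paho can also route different topics to different handler functions with <code>message_callback_add()</code>, which avoids keeping your own topic dictionary. The broker address and topics below are just the ones from the example above.</p> <pre><code>import paho.mqtt.client as mqtt

def on_camera(client, userdata, msg):
    print('camera', msg.payload.decode())

def on_detect(client, userdata, msg):
    print('command', msg.payload.decode())

client = mqtt.Client('Camera')
client.message_callback_add('pc/camera', on_camera)        # per-topic handlers
client.message_callback_add('commands/detect', on_detect)
client.connect('127.0.0.1')
client.subscribe([('pc/camera', 0), ('commands/detect', 1)])
client.loop_forever()
</code></pre>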
|mqtt|eclipse-iot|
How to extract values of multiple topics in onMessageArrived(message) function of Paho MQTT JavaScript API?
4295
<p>I have created a publisher script using the <a href="https://www.eclipse.org/paho/clients/js" rel="noreferrer">Paho MQTT JavaScript API</a> which publishes values to two topics, <strong>MyHome/Temp</strong> and <strong>MyHome/Hum</strong>. The script is running successfully and publishing data to a <a href="https://www.cloudmqtt.com/" rel="noreferrer">CloudMQTT</a> broker. In my subscriber script I have subscribed to these two topics and print them in the console as follows:</p> <pre><code>function onConnect() {
  console.log("onConnect");
  client.subscribe("MyHome/Temp");
  client.subscribe("MyHome/Hum");
}

function onMessageArrived(message) {
  console.log(message.destinationName + " : " + message.payloadString);
}
</code></pre> <p>It is printing both the topic names and the corresponding values. Now I want to extract the values of both topics using <code>message.payloadString</code> and store them in variables as follows:</p> <pre><code>function onMessageArrived(message) {
  var temp = message.payloadString;
  var hum = message.payloadString;
  ...
}
</code></pre> <p>But I am getting only one value in both variables, i.e. the value of the last topic, 'Hum'. Can anyone please help me solve this?</p>
2019-07-05T05:01:36.837
<p>For transporting image data:</p> <p>If the crops are to be monitored for differences over time, then the captured images can be diffed and only the difference patch needs to be uplinked.</p> <p>The LoRa Alliance website provides a specification for fragmented data block transport. For this question, each subsequent image can be seen as a patch, and fragmented data block transport over a Class C multicast session could be used to transfer the diff image.</p> <p><a href="https://lora-alliance.org/resource_hub/lorawan-fragmented-data-block-transport-specification-v1-0-0/" rel="nofollow noreferrer">https://lora-alliance.org/resource_hub/lorawan-fragmented-data-block-transport-specification-v1-0-0/</a></p>
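<p>To get a feel for the numbers, the sketch below splits a byte buffer (an image diff, say) into numbered fragments so the receiver can reassemble them and spot missing parts. The 200-byte payload is an assumption; the real limit depends on region and spreading factor and can be much smaller, which is why sending whole megabyte-sized images is usually impractical.</p> <pre><code># Sketch: split a (diff) image into numbered fragments for uplink.
PAYLOAD = 200                     # assumed usable bytes per LoRa frame

def fragment(data):
    chunk = PAYLOAD - 4           # reserve 4 bytes: fragment index + total count
    total = (len(data) + chunk - 1) // chunk
    for i in range(total):
        header = i.to_bytes(2, 'big') + total.to_bytes(2, 'big')
        yield header + data[i * chunk:(i + 1) * chunk]

# A 100 kB diff needs just over 500 frames at these sizes, so with duty-cycle
# limits of roughly one frame every few seconds a single image takes a long time.
frames = list(fragment(b'\x00' * 100_000))
print(len(frames), 'fragments')
</code></pre>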
|networking|lora|lorawan|
Is it a good option to send images over LoRa network?
4298
<p>LoRa (Long Range) is one of the promising technologies that offers long-range communication with low power consumption.</p> <p>Therefore, when building a device that will be deployed in an area with no network coverage, or to achieve a long distance such as 5 km or more with low power consumption, it is considered a suitable technology.</p> <p>However, if I am planning to build a system that should send images to a remote location over a long range or in an area with no network (i.e. rural areas, agricultural fields), what is the better option?</p> <p>Is it good practice to use LoRa for sending images? Since it has low bandwidth, how can a large image be sent? What is the maximum size that could be sent?</p> <p>In detail, I want to capture images of crops in the field and send them for analysis on a cloud server. So if I am sending an image (1-2 MB in size) in small parts, how many transmissions will it take if the distance between the sensor node and the gateway is around 2 km? How can I ensure that all parts are transmitted successfully?</p> <p>In practice I have seen that, out of 10 packets of a simple text message, 1-2 packets are usually lost even at the closest distance of 10 ft. That may be because I have not yet used a gateway as the receiver; rather, I have used SX1278 LoRa modules as both the sending and receiving devices in 1-to-1 communication.</p>
2019-07-07T22:04:37.157
<p>Without seeing your code, so we know what topics you are subscribed to, this is really hard to answer.</p> <p>But yes, this is a totally public broker with a single shared topic space for all users, so it is very likely that you are receiving a retained message published by a previous user.</p> <p>It should really only be used to test MQTT client implementations and not for anything of value, since at any time anybody could publish to any topic, or it could go down (the Eclipse broker is set to change URL and implementation soon [July 2019]).</p>
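<p>For reference, if it is a stale retained message, whoever controls that topic can clear it by publishing a zero-length payload with the retain flag set, e.g. with paho (the topic name is just an example):</p> <pre><code>import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect('iot.eclipse.org', 1883)
# A zero-length payload published with retain=True removes the retained message,
# so later subscribers no longer receive the stale 'off'.
client.publish('some/test/topic', payload=None, retain=True)
client.disconnect()
</code></pre>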
|mqtt|eclipse-iot|
Eclipse broker publishes off on subscribe
4303
<p>I'm testing an MQTT client setup with the <a href="http://iot.eclipse.org" rel="nofollow noreferrer">Eclipse</a> test server. I noticed <code>off</code> is sent automatically to every topic my client subscribes to, and the <code>retain</code> flag is set. I can see the logic behind it but so far, I haven't found this broker feature documented anywhere so I was wondering if anyone has had a similar experience or has more details about any documentation for this broker. Is it intended or is the public broker topic-space just global and I'm getting someone's messages?</p>
2019-07-10T17:24:35.920
<p>Many of these types of product are intended for use on exterior doors, so they should be weatherproof (when installed correctly).</p> <p>I would point out that they are probably intended to be installed vertically (e.g. in a front door) rather than horizontally (e.g. in the top of a box), where water might pool on the surrounding surface or the face of the product.</p>
|smart-lock|
Smart Lock | Exterior Use
4314
<p>I have what may be a crazy idea, which was inspired by both my neighbours having deliveries of online purchases swiped off their porch... not cool. I want to build a box for couriers to put stuff in, and I want to put a lock on it. I have read about smart locks that can have temporary passwords set (some examples may be found <a href="https://www.homedepot.com/b/Smart-Home-Smart-Home-Security-Smart-Locks/N-5yc1vZc7by" rel="nofollow noreferrer">here</a>). Now, my question is: has anyone done this, and would these locks need to be tucked out of the elements (like under a roof) so that rain and snow (a big issue where I live) won't get on them?</p> <p>Being a new contributor to this site, I don't know if this is a place for product recommendations - so what I am asking for specifically is things to look out for, or design considerations other than "box", which is all I have in mind as of now.</p>
2019-07-11T05:33:30.203
<p>I also bought myself an <a href="https://www.espressif.com/en/products/hardware/esp-wrover-kit/overview" rel="nofollow noreferrer">ESP-WROVER-KIT-VB</a>.</p> <p>It's pricey, <a href="https://sunhokey.en.alibaba.com/product/62571938928-221082473/ESP_WROVER_KIT_VB_ESP32_wrover_development_board_with_3_2_inch_LCD.html" rel="nofollow noreferrer">$39.99 on AliExpress</a>, <em><strong>but</strong></em> it has a large screen. Here it is, playing Doom!</p> <p><a href="https://i.stack.imgur.com/dArx5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dArx5.png" alt="ESP-WROVER-KIT running Doom" /></a></p> <p>Here's a <a href="https://www.youtube.com/watch?v=Gb_JFDa0AIo" rel="nofollow noreferrer">YouTube link</a> to the video.</p> <p>The best part, though, IMO, is the onboard debugger, which lets me load my program from the IDE (Visual Studio Code + PlatformIO), set breakpoints and run to them, examine the call stack when they hit, plus examine &amp; change variable values, etc.</p> <blockquote> <p>Description:<br /> The ESP-WROVER-KIT from Espressif supports the most distinguishing features of the ESP32. Whether you need external SRAM for IoT applications, or an LCD+camera interface, it has you covered!<br /> The ESP-WROVER-KIT is a newly-launched development board built around ESP32. This board comes with an ESP32 module already. The V3 version of this kit now comes with the ESP32-WROVER. This module comes with an additional 4 MB SPI PSRAM (pseudo-static RAM).<br /> The ESP-WROVER-KIT features support for an LCD and MicroSD card. The I/O pins have been led out from the ESP32 module for easy extension. The board carries an advanced multi-protocol USB bridge (the FTDI FT2232HL), enabling developers to use JTAG directly to debug the ESP32 through the USB interface. The development board makes secondary development easy and cost-effective.</p> <p>Features:<br /> ESP32 is engineered to be fast, smart and versatile. The ESP-WROVER-KIT complements these characteristics by offering an on-board high-speed MicroSD card interface, a VGA camera interface, as well as a 3.2” SPI LCD panel and I/O expansion capabilities.<br /> Bogged down by bugs? The ESP32 supports JTAG debugging, while the ESP-WROVER-KIT integrates a USB debugger as well. This makes debugging and tracing complex applications very easy, without the need for any additional hardware.<br /> Have you been developing your applications around the ESP-WROOM-32 module? Not only does the ESP-WROVER-KIT support the popular ESP-WROOM-32 module, but it also supports the new ESP32-WROVER module!</p> <p>Specifications:<br /> Dual core 240 MHz CPU;<br /> with 4 MB SPI PSRAM (pseudo-static RAM);<br /> Built-in USB-JTAG debugger;<br /> 3.2” SPI LCD panel;<br /> Micro-SD card interface<br /> VGA camera interface<br /> I/O expansion<br /> Wiki: <a href="http://wiki.52pi.com/index.php/ESP32_WROVER_KIT_SKU:_EP-0090" rel="nofollow noreferrer">http://wiki.52pi.com/index.php/ESP32_WROVER_KIT_SKU:_EP-0090</a></p> </blockquote>
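<p>If it helps, getting the on-board debugger working in PlatformIO needs very little configuration. The <code>platformio.ini</code> below is a sketch of what such a configuration can look like; the board id, framework and <code>debug_tool</code> values are assumptions worth double-checking against the current PlatformIO docs for this kit.</p> <pre><code>[env:esp-wrover-kit]
platform = espressif32
board = esp-wrover-kit
framework = arduino
; the on-board FT2232HL provides the JTAG probe, so no extra hardware is needed
debug_tool = ftdi
debug_init_break = tbreak setup
</code></pre>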
|esp32|platform-io|
Seeking ESP32 with display which is supported by the PlatformIO debugger
4318
<p>I am switching to PlatformIO as my ESP32 IDE. Alas, none of my boards are supported by the debugger. See <a href="https://docs.platformio.org/en/latest/plus/debugging.html#piodebug" rel="nofollow noreferrer">https://docs.platformio.org/en/latest/plus/debugging.html#piodebug</a></p> <p>To save me a long time searching, does anyone know of an ESP32 board with a built-in display which is supported? I don't care which board; all I need is WiFi and BT (or LoRa) and a built-in display.</p> <p>Please note that I am not asking for the "best" or anything opinion-based. The accepted answer will be the first posted. I am hoping that one of you is already using the PlatformIO debugger on an ESP32 with a built-in display.</p>
2019-07-12T07:46:59.697
<p>Yes, LoRa can do point-to-point communication. There are many examples of people using LoRa between 2 devices for low-power, low-bandwidth notifications, e.g. <a href="https://www.youtube.com/watch?v=WV_VumvI-0A" rel="nofollow noreferrer">https://www.youtube.com/watch?v=WV_VumvI-0A</a></p> <p>The hub/spoke topology is normally associated with LoRaWAN implementations (e.g. <a href="https://www.thethingsnetwork.org/" rel="nofollow noreferrer">The Things Network</a>) used for building wide-area support for devices to communicate with a cloud backend.</p>
|lora|
Can LoRa end points communicate directly?
4321
<p>Can LoRa end points communicate directly? </p> <p>Or is a gateway/network server always required?</p>
2019-07-15T15:33:32.703
<p>Google Assistant is not something you buy; you buy devices that support Google Assistant to varying degrees (e.g. a Nest Home Hub Max supports just about everything possible, whereas a Google Home Mini supports less because it doesn't have a screen or a camera).</p> <p>Nearly all modern Android phones support Google Assistant, which will respond to the "OK, Google" keyword or via text, a squeeze (Pixel phones), or a custom button. All the features that you could trigger via a Google Home device can be triggered by interacting with the Google Assistant on the phone.</p> <p>You can also install the Google Home app (which is needed to set up new devices anyway).</p> <p>As for the TV, I can't see any mention of Android TV support in the listing, so it won't have any direct integration with Google Assistant smart home control. I have to assume that you will be plugging the Chromecast into an HDMI port. Assuming that at least one of the HDMI ports supports HDMI-CEC, you will be able to turn the TV on/off and switch to the Chromecast input (but not away from it).</p> <p>Voice control of playing content will depend on what services you are signed up for. By default the Assistant can search YouTube for video, but the "Browse" tab (3rd from the left at the bottom of the screen) in the Google Home app offers me content from all the video apps I have installed (on the phone) that support casting to a Chromecast device.</p> <p>Currently all control of devices (except the Chromecast) is routed via the cloud from the Google back end to the device manufacturer's system and then to the device, but Google has recently announced support for local control, where commands will be issued across the local network to devices. It is not yet clear if this will also be possible via a phone or will require a device like a Google Home. (The stated reason for this is to reduce latency, and it will fall back to cloud control.)</p>
|smart-home|google-home|google-assistant|smart-assistants|smart-tv|
Connecting Devices In Home
4334
<p>I am quite new to the Internet of Things, and my wife bought a <a href="https://www.amazon.ca/TCL-43S425-CA-Ultra-Smart-Television/dp/B07DXT8XDQ/ref=sr_1_2?crid=IC2HE4ECEZLX&amp;keywords=tcl%20smart%20tv&amp;qid=1563203578&amp;s=electronics&amp;smid=A3DWYIK6Y9EEQB&amp;sprefix=tcl%20smart%20%2Celectronics%2C182&amp;sr=1-2" rel="nofollow noreferrer">TV</a> today (a pretty good deal) from Amazon Prime Day - I included the link so that we are talking about the same device. In the home we already have a Chromecast (Gen 1), and we have bought, but not yet installed, locks for the front and rear doors. All of these can be controlled using Google Home. We have, however, not bought a device like a Google Home to work as a hub (I worked for a supplier to Huawei, among others, while living in China and know what a lot of the collected data is used for, so I am a little paranoid about that kind of tech).</p> <p>Also, please note that as we continue to do renovations in our home we will be adding ceiling fans, thermostats, cameras, etc., all of which ideally could be linked and controlled through Google Home.</p> <p>My question is this: in order to connect all of these (current and future) devices, do I need to buy a "hub"? I have read online and haven't been able to find a definitive answer. My main concern <strong>right now</strong> is being able to control the TV from my phone without having to use a different app (Roku), as I like the simplicity of the Google Home app.</p> <p>In short - should I invest in some type of "hub" or do I not need to?</p>
2019-07-15T21:01:38.003
<p>MQTT is probably the right answer.</p> <p>Each ESP can publish to the broker with a topic structure something like:</p> <pre><code>client1/sensor1
client1/sensor2
</code></pre> <p>You can then use MQTT over WebSockets to subscribe to all the topics (either each topic separately or with the <code>#</code> wildcard).</p> <p>Since each message from a sensor arrives on its own topic, you can then use the topic to determine which bit of the page to update.</p> <p>For desktop or mobile apps you can use either native MQTT or MQTT over WebSockets.</p>
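<p>A minimal subscriber-side sketch of that topic layout (in Python here just to show the dispatch idea; in the browser you would do the same with the Paho or MQTT.js client over WebSockets): subscribe once with a wildcard and use the topic to decide which client/sensor a value belongs to. The broker address is a placeholder.</p> <pre><code># pip install paho-mqtt
import paho.mqtt.client as mqtt

latest = {}                                   # (client, sensor) -> last value

def on_message(client, userdata, msg):
    node, sensor = msg.topic.split('/', 1)    # e.g. 'client3/sensor1'
    latest[(node, sensor)] = msg.payload.decode()
    # Here you would push the update to the page widget for that client/sensor.
    print(node, sensor, latest[(node, sensor)])

client = mqtt.Client('dashboard')
client.on_message = on_message
client.connect('broker.example.com', 1883)    # placeholder broker
client.subscribe('+/+')                       # all clients, all sensors
client.loop_forever()
</code></pre>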
|mqtt|esp8266|web-sockets|
How to send continuous data from a few ESP8266 to a webserver?
4337
<p>I am new to ESP8266 boards and IoT programming and I don't know how to describe better what I want to do without a picture. </p> <p><strong>Question: How to send continuous data from 4 esp8266 WiFi clients to a webpage or to an application which handles all of them in parallel without introducing noticeable delays?</strong> </p> <p>So, I need 4 clients (users), each having an <a href="https://www.aliexpress.com/item/32802874451.html?spm=a2g0s.9042311.0.0.24844c4dwmzJFi" rel="nofollow noreferrer">ESP8266 ESP-12E NodeMcu CP2102</a> board with 3 sensors (analog /digital sensors). These nodes should send continuous data or at least faster than human average reaction time (i.e. 250ms). These clients should behave as players do in a multiplayer online game for example. Or I don't know, maybe like 4 WiFi Gaming Controllers.</p> <p>I need the system to be very reactive (without having a noticeable delay in displaying the data). For example, if User4 touches the Sensor3 (which can be also a simple button), the webpage / the application should sense this immediately while still displaying/handling the data from the other users. That's why I want to send the data to the webpage or to the application (C# app or Android App) as a long string even though some users are inactive or if their sensors inputs didn't change during a frame. </p> <p>Sensor1 can be a pulse sensor, Sensor2 a microphone and sensor3 a button or a touch sensor. It doesn't matter so much and I'm not decided yet on this. </p> <p>My problem is that I don't know how to approach this. I did some tutorials with WebSockets and one tutorial with MQTT, but I'm still very, very confused. I don't know if it's possible. </p> <p>One WebSocket tutorial I've followed is this one (I also found some typos in the code):<br> <a href="https://esp8266-shop.com/blog/websocket-connection-between-esp8266-and-node-js-server/" rel="nofollow noreferrer">https://esp8266-shop.com/blog/websocket-connection-between-esp8266-and-node-js-server/</a></p> <p>The MQTT tutorial: <br><a href="https://esp8266-shop.com/blog/configure-mqtt-runing-on-esp8266-for-home-automation/" rel="nofollow noreferrer">https://esp8266-shop.com/blog/configure-mqtt-runing-on-esp8266-for-home-automation/</a> </p> <p><a href="https://www.aliexpress.com/item/32846568600.html?spm=a2g0s.9042311.0.0.ab714c4dVZuUXZ" rel="nofollow noreferrer">1400mAh LiPo Batteries</a></p> <p><a href="https://www.aliexpress.com/item/32477478565.html?spm=a2g0s.9042311.0.0.ab714c4dVZuUXZ" rel="nofollow noreferrer">3.7V to 5V @ 2A DC-DC boost convertors</a></p> <p><a href="https://i.stack.imgur.com/cVE5K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cVE5K.png" alt="Multiple ESP8266 sending a continuous stream of data"></a></p>