2022-07-30T07:02:02.310
<p>Data transfers have to be initiated by the GPRS module; your cellular provider will have a firewall that blocks all connection requests that originate from the internet.</p> <p>The usual way round this is to send an SMS to the tracker, which can either return its position in an SMS, or be triggered to initiate contact with a server.</p>
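To illustrate the second approach, here is a minimal sketch of a Python server that replies to the tracker over the TCP connection the device itself opened (the port and the reply command are hypothetical; check your tracker's protocol document). Because the carrier's firewall blocks inbound connections, the device-initiated socket is the only reliable channel back to it:

```python
import socket

HOST, PORT = "0.0.0.0", 5023  # hypothetical listening port


def serve_once(port=PORT):
    """Accept one device-initiated connection and reply on the same socket.

    The carrier's NAT/firewall blocks inbound connections, so commands to
    the tracker must ride on the connection the tracker opened itself.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(1024)        # position report from the tracker
            conn.sendall(b"**,imei:0,B")  # hypothetical "request location" command
            return addr, data
```

The key design point: the server never dials out to the device's apparent public address (that will be dropped by the carrier), it only answers over the already-established socket.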
|tracking-devices|python|
How to initiate a connection to a GPRS-enabled tracking device
6373
<p>I am trying to work with a few GPS-enabled tracking devices, one of them being a TK-303 GPS tracker that uses SIM-powered GPRS to access the internet. Currently, I can receive messages on my Python TCP socket server, but when I tried initiating a connection to the device using the public address from which its messages arrive, the connection attempt failed. My question is: how is this done? I would appreciate any insight on how to go about it. Thank you.</p>
2022-08-12T17:33:58.720
<p>After the recommended reset, I tried it again not on the charger. That worked for me.</p>
|smart-home|hardware|kitchen-appliances|
Why can't I update my Ember mug's firmware?
6388
<p>My Ember mug has a firmware update available, but when I try to update it the process fails at 50% during a reconnection step. How do I make sure the update is able to finish?</p>
2022-08-18T05:09:46.187
<p>First and foremost, any theoretical calculations are only valid if you have <strong>line-of-sight (LoS) between transmitter and receiver</strong>. As soon as you have an obstacle, it is nearly impossible to <em>compute</em> the range; you'll only be able to <em>measure</em> it to take the related losses into account.</p> <p>Second, you need not only the direct line of sight to be free of obstacles, but you need the <strong><a href="https://en.wikipedia.org/wiki/Fresnel_zone" rel="nofollow noreferrer">Fresnel zone</a> to be free</strong> as well. At 433 MHz, the radius of the Fresnel zone at 1 km is 13 meters. At 100 km it is over 130 meters.</p> <p>Taking this into account, together with the curvature of the Earth, for any significant distance, even on flat terrain without any obstacles, you need one or both of the devices to be quite high above ground. All LoRa distance records have involved either balloons or devices very high above ground (on top of mountains or towers).</p> <p>Now, for the calculation...</p> <p>What you need is a link budget calculator, which uses the link budget equation to compute the result. There are plenty on the web, though they may not work in the direction you need (they are often distance -&gt; budget rather than budget -&gt; distance), but you can always do a binary search.</p> <p>The inputs you will need are:</p> <ul> <li>The TX power (10 dBm apparently)</li> <li>The losses in cabling etc. on the TX side (unknown)</li> <li>The TX antenna gain (unknown, but a quarter-wave whip behaves roughly like a dipole, so about 2.1 dBi)</li> <li>The RX antenna gain (the same as the TX antenna?)</li> <li>The losses in cabling etc. on the RX side (unknown)</li> <li>The RX sensitivity (-148 dBm)</li> </ul> <p>Counting 0 for the losses in cabling etc., the link budget without free space path loss is 10 + 0 + 2.1 + 2.1 + 0 - (-148), i.e. 162.2 dB.</p> <p>So your max distance is where the free space path loss (FSPL) is 162.2 dB. 
FSPL in dB is 20log10(d) + 20log10(f) - 147.55 (with d in meters and f in Hz).</p> <p>With f = 433 MHz, FSPL = 20log10(d) + 25.18, which means d = 10^((FSPL - 25.18)/20), or a bit over 7000 km.</p> <p>Note that there are quite a few assumptions here:</p> <ul> <li>That misc losses are 0. Most probably not true.</li> <li>That there are no obstacles in the line of sight, and <strong>also</strong> none in the Fresnel zone.</li> <li>That sensitivity is actually -148 dBm (this is usually only achieved at the very slowest data rates).</li> <li>That only RSSI is relevant, and not SNR (in which case noise becomes an issue).</li> </ul> <p>In practice, what will determine the max distance is most likely the terrain. There are tools which allow you to enter points on a map and see if a given link is feasible. Other tools allow you to see the theoretical coverage from a given point.</p> <p>Note, again, that all this is only valid if both devices are outdoors with strictly no obstacles between them. Anything where one or both devices are indoors, or where there are obstacles (terrain, trees, buildings), will drastically reduce the max distance (it can go down to a few dozen meters!).</p>
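The arithmetic above can be double-checked with a short script (a sketch using only the numbers quoted in this answer, with all cabling losses assumed to be zero):

```python
import math


def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (d in meters, f in Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55


def max_distance_m(link_budget_db, freq_hz):
    """Distance at which FSPL equals the link budget (inverts fspl_db)."""
    return 10 ** ((link_budget_db + 147.55 - 20 * math.log10(freq_hz)) / 20)


# Link budget from the answer: 10 dBm TX + 2.1 dBi + 2.1 dBi - (-148 dBm) sensitivity
budget = 10 + 2.1 + 2.1 + 148        # 162.2 dB
d = max_distance_m(budget, 433e6)    # roughly 7,000 km -- theoretical LoS only
```

Again, this is a free-space upper bound; real terrain, obstacles and noise will bring it down by orders of magnitude.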
|lora|
Calculating LoRa range with an RA-02 chip
6391
<p>I want to use a LoRa RA-02 module, based on the SX1278, at 433 MHz.</p> <p>In some countries I can transmit at 10 dBm max, and I will connect a quarter-wavelength (17 cm) antenna.</p> <p>The RA-02 datasheet advertises a <code>high sensitivity of over -148dBm</code>.</p> <p>How can I calculate the maximum distance between transmitter and receiver?</p>
2022-09-29T11:01:44.730
<p>There is nothing in the MQTT protocol for end-to-end delivery notification.</p> <p>This is because when a message is published there may be:</p> <ul> <li>0 clients subscribed to the topic</li> <li>N clients subscribed to the topic</li> <li>N offline clients with a persistent subscription to the topic</li> </ul> <p>This means that end-to-end notification would have to handle all 3 of these use cases, where a notification could:</p> <ul> <li>never come</li> <li>come from 1000s of clients</li> <li>come from some clients, then from more at any time in the future</li> </ul> <p>If you need end-to-end confirmation, you need to publish a second acknowledgement message from the receiving client.</p>
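The acknowledgement pattern from the last paragraph can be sketched as follows (topic names are invented, and the callback signature assumed is that of the paho-mqtt client, which the question does not mention):

```python
# Application-level acknowledgement sketch (hypothetical topic names).
# The subscriber republishes an ack that the original publisher listens for.

DATA_TOPIC = "sensors/temp"
ACK_TOPIC = "sensors/temp/ack"


def process(payload):
    """Placeholder for the subscriber's real work."""
    print("handling", payload)


def on_data(client, userdata, msg):
    """Subscriber side: handle the payload, then confirm receipt."""
    process(msg.payload)
    client.publish(ACK_TOPIC, b"received", qos=1)


def on_ack(client, userdata, msg):
    """Publisher side: called when (and only when) some subscriber confirmed."""
    print("end-to-end delivery confirmed:", msg.payload)
```

With paho-mqtt you would wire these up via `client.message_callback_add(DATA_TOPIC, on_data)` on the subscriber and `client.message_callback_add(ACK_TOPIC, on_ack)` on the publisher; note the caveats above still apply — zero, many, or late acks are all possible.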
|mqtt|
Is there a way to be sure that the message is received by the subscriber in MQTT
6423
<p>As I understand it, MQTT QoS levels apply to the connection between Publisher and Broker, or between Broker and Subscriber. QoS levels have nothing to do with a direct connection between Publisher and Subscriber. That means a Publisher can't be sure whether a specific Subscriber received a message or not. The only thing the Publisher knows is that the Broker has received the message, provided an appropriate QoS level is used. Is that true?</p>
2022-10-11T14:53:36.747
<p>The easy way is to use an HTTP request: while the Python app is running, when you detect something, send a <strong>POST request</strong> to the ESP32. This link could help you:</p> <p><a href="https://stackoverflow.com/questions/10313001/is-it-possible-to-make-post-request-in-flask">https://stackoverflow.com/questions/10313001/is-it-possible-to-make-post-request-in-flask</a></p> <p>On the ESP32, since you already run <code>esp_http_server</code>, you will receive the request there; just register a handler that decides what to do (e.g. toggle the alarm GPIO) when the request arrives.</p>
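As a sketch of the Python side (the endpoint path and the pin/state parameter names are hypothetical; they must match whatever URI handler you register in esp_http_server on the camera):

```python
import urllib.parse
import urllib.request

# Hypothetical alarm endpoint on the ESP32-Cam.
ESP32_ALARM_URL = "http://192.168.35.7/alarm"


def build_alarm_request(url, pin, state):
    """Build a form-encoded POST request telling the ESP32 to set a GPIO."""
    data = urllib.parse.urlencode({"pin": pin, "state": state}).encode()
    return urllib.request.Request(url, data=data, method="POST")


def trigger_alarm(pin=12, state=1):
    """Fire the request from the detection loop; returns True on HTTP 200."""
    req = build_alarm_request(ESP32_ALARM_URL, pin, state)
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False  # ESP32 unreachable; don't crash the video loop
```

You would call `trigger_alarm()` inside the `if id == 'drone':` branch of the detection loop.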
|esp32|python|gpio|
How to control the state of GPIO pins on ESP32 Camera from an external python script?
6434
<p>I am currently working on a project that detects specific objects in a video stream. If a particular object is detected, an alarm of some sort will be rung. An HTML page will display the object detection output and the name of the object detected. I have used flask with python for the HTML page. The video from the ESP32 Camera is given to a python script through a URL; then, the output will be displayed on the HTML page. I am not using the camera webserver rather using a flask web app. I have completed the object detection part and the object detection output successfully. But I can't seem to figure out a way to control the alarm system attached to the ESP32-Cam through the python script. This is my python code</p> <pre><code>from pickletools import read_uint1 from flask import Flask, render_template, Response import cv2 import cvzone #url = 'C:/Users/Mukesh/Downloads/videoplayback.mp4' url = '192.168.35.7:81/stream' classNames = [] classFile = 'C:/Users/Mukesh/Desktop/Mini Project/Code/AIM/coco.names' with open(classFile, 'rt') as f: classNames = f.read().split('\n') configPath = 'C:/Users/Mukesh/Desktop/Mini Project/Code/AIM/ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt' weightsPath = &quot;C:/Users/Mukesh/Desktop/Mini Project/Code/AIM/frozen_inference_graph.pb&quot; net = cv2.dnn_DetectionModel(weightsPath, configPath) net.setInputSize(320, 320) net.setInputScale(1.0 / 127.5) net.setInputMean((127.5, 127.5, 127.5)) net.setInputSwapRB(True) app = Flask(__name__) thres = 0.55 nmsThres = 0.2 cap = cv2.VideoCapture(url) cap.set(3, 640) cap.set(4, 480) def gen_frames(): while 1: isTrue, img = cap.read() img = cv2.flip(img, 1) if img is not None: classIds, confs, bbox = net.detect( img, confThreshold=thres, nmsThreshold=nmsThres) try: for classId, conf, box in zip(classIds.flatten(), confs.flatten(), bbox): id = classNames[classId - 1] #Send ID to HTML page print(id) if id=='drone': cvzone.cornerRect(img, box) cv2.putText(img, f'{classNames[classId - 1].upper()} 
{round(conf * 100, 2)}', (box[0] + 10, box[1] + 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 255, 0), 2) frame = img ret, buffer = cv2.imencode('.jpg', frame) frame = buffer.tobytes() cv2.imshow('',img) if cv2.waitKey(20)&amp;0xff==ord(' '): break yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n') except: pass cv2.waitKey(1) else: break @app.route('/video_feed') def video_feed(): return Response(gen_frames(), mimetype='multipart/x-mixed-replace; boundary=frame') @app.route('/') def home(): &quot;&quot;&quot;Video streaming home page.&quot;&quot;&quot; return render_template('Home.html') if __name__ == '__main__': app.run(debug=True) </code></pre> <p>This is the webpage</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;title&gt;Home&lt;/title&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1, shrink-to-fit=no&quot;&gt; &lt;link href=&quot;https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css&quot; rel=&quot;stylesheet&quot;&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css&quot; integrity=&quot;sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO&quot; crossorigin=&quot;anonymous&quot;&gt; &lt;script src=&quot;http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.bundle.min.js&quot;&gt;&lt;/script&gt; &lt;style&gt; body { font-family: 'Gill Sans', 'Gill Sans MT', Calibri, 'Trebuchet MS', sans-serif; background-image: linear-gradient(to right, #2E3192, #1BFFFF); background-position: center; background-repeat: no-repeat; background-size: cover; position: relative; min-height: 100vh; display: 
flex; flex-flow: row; } .booth { flex: 1 1 auto; } .f2 { flex: 1 1 auto; display: flex; flex-flow: column-reverse; } .f2 .text { flex: 0 1 auto; } .f2 .frame { flex: 1 1 auto; position: relative; display: flex; flex-flow: column; } .f2 .frame #click-photo { flex: 0 1 auto; } .f2 .frame #canvas { flex: 1 1 auto; } #canvas { background-color: red; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;div class=&quot;booth&quot;&gt; &lt;!--video id=&quot;video&quot; width=&quot;100%&quot; height=&quot;100%&quot; autoplay&gt;&lt;/video--&gt; &lt;img id=&quot;video&quot; src=&quot;{{ url_for('video_feed') }}&quot; width=&quot;100%&quot;&gt; &lt;/div&gt; &lt;div class=&quot;f2&quot;&gt; &lt;div class=&quot;frame&quot;&gt; &lt;canvas id=&quot;canvas&quot;&gt;&lt;/canvas&gt; &lt;button id=&quot;click-photo&quot;&gt;Click Photo&lt;/button&gt; &lt;/div&gt; &lt;div class=&quot;text&quot;&gt; &lt;h2 id=&quot;status&quot; src=&quot;{{ url_for('obj')}}&quot;&gt;&lt;/h2&gt; &lt;!---Drone status---&gt; &lt;h2 id=&quot;cam&quot;&gt;&lt;/h2&gt; &lt;!--Camera location--&gt; &lt;/div&gt; &lt;/div&gt; &lt;script&gt; let video = document.querySelector(&quot;#video&quot;); let click_button = document.querySelector(&quot;#click-photo&quot;); let canvas = document.querySelector(&quot;#canvas&quot;); click_button.addEventListener('click', function () { canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height); let image_data_url = canvas.toDataURL('image/jpeg'); console.log(image_data_url); //document.getElementById(&quot;status&quot;).innerHTML = &quot; Drone: detected&quot;; document.getElementById(&quot;cam&quot;).innerHTML = &quot;Camera: 1&quot;; }); var stop = function () { var stream = video.srcObject; var tracks = stream.getTracks(); for (var i = 0; i &lt; tracks.length; i++) { var track = tracks[i]; track.stop(); } video.srcObject = null; } var start = function () { var video = document.getElementById('video'), vendorUrl = window.URL || window.webkitURL; if 
(navigator.mediaDevices.getUserMedia) { navigator.mediaDevices.getUserMedia({ video: true }) .then(function (stream) { video.srcObject = stream; }).catch(function (error) { console.log(&quot;Something went wrong!&quot;); }); } } $(function () { start(); }); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>This is the ESP32 Camera code</p> <pre><code>#include &quot;esp_camera.h&quot; #include &lt;WiFi.h&gt; #include &quot;esp_timer.h&quot; #include &quot;img_converters.h&quot; #include &quot;Arduino.h&quot; #include &quot;fb_gfx.h&quot; #include &quot;soc/soc.h&quot; //disable brownout problems #include &quot;soc/rtc_cntl_reg.h&quot; //disable brownout problems #include &quot;esp_http_server.h&quot; #include &lt;ESP32Servo.h&gt; //Replace with your network credentials const char* ssid = &quot;Mukesh&quot;; const char* password = &quot;qwertyuiop&quot;; #define PART_BOUNDARY &quot;123456789000000000000987654321&quot; // This project was tested with the AI Thinker Model, M5STACK PSRAM Model and M5STACK WITHOUT PSRAM #define CAMERA_MODEL_AI_THINKER //#define CAMERA_MODEL_M5STACK_PSRAM //#define CAMERA_MODEL_M5STACK_WITHOUT_PSRAM // Not tested with this model //#define CAMERA_MODEL_WROVER_KIT #if defined(CAMERA_MODEL_WROVER_KIT) #define PWDN_GPIO_NUM -1 #define RESET_GPIO_NUM -1 #define XCLK_GPIO_NUM 21 #define SIOD_GPIO_NUM 26 #define SIOC_GPIO_NUM 27 #define Y9_GPIO_NUM 35 #define Y8_GPIO_NUM 34 #define Y7_GPIO_NUM 39 #define Y6_GPIO_NUM 36 #define Y5_GPIO_NUM 19 #define Y4_GPIO_NUM 18 #define Y3_GPIO_NUM 5 #define Y2_GPIO_NUM 4 #define VSYNC_GPIO_NUM 25 #define HREF_GPIO_NUM 23 #define PCLK_GPIO_NUM 22 #elif defined(CAMERA_MODEL_M5STACK_PSRAM) #define PWDN_GPIO_NUM -1 #define RESET_GPIO_NUM 15 #define XCLK_GPIO_NUM 27 #define SIOD_GPIO_NUM 25 #define SIOC_GPIO_NUM 23 #define Y9_GPIO_NUM 19 #define Y8_GPIO_NUM 36 #define Y7_GPIO_NUM 18 #define Y6_GPIO_NUM 39 #define Y5_GPIO_NUM 5 #define Y4_GPIO_NUM 34 #define Y3_GPIO_NUM 35 #define Y2_GPIO_NUM 32 #define 
VSYNC_GPIO_NUM 22 #define HREF_GPIO_NUM 26 #define PCLK_GPIO_NUM 21 #elif defined(CAMERA_MODEL_M5STACK_WITHOUT_PSRAM) #define PWDN_GPIO_NUM -1 #define RESET_GPIO_NUM 15 #define XCLK_GPIO_NUM 27 #define SIOD_GPIO_NUM 25 #define SIOC_GPIO_NUM 23 #define Y9_GPIO_NUM 19 #define Y8_GPIO_NUM 36 #define Y7_GPIO_NUM 18 #define Y6_GPIO_NUM 39 #define Y5_GPIO_NUM 5 #define Y4_GPIO_NUM 34 #define Y3_GPIO_NUM 35 #define Y2_GPIO_NUM 17 #define VSYNC_GPIO_NUM 22 #define HREF_GPIO_NUM 26 #define PCLK_GPIO_NUM 21 #elif defined(CAMERA_MODEL_AI_THINKER) #define PWDN_GPIO_NUM 32 #define RESET_GPIO_NUM -1 #define XCLK_GPIO_NUM 0 #define SIOD_GPIO_NUM 26 #define SIOC_GPIO_NUM 27 #define Y9_GPIO_NUM 35 #define Y8_GPIO_NUM 34 #define Y7_GPIO_NUM 39 #define Y6_GPIO_NUM 36 #define Y5_GPIO_NUM 21 #define Y4_GPIO_NUM 19 #define Y3_GPIO_NUM 18 #define Y2_GPIO_NUM 5 #define VSYNC_GPIO_NUM 25 #define HREF_GPIO_NUM 23 #define PCLK_GPIO_NUM 22 #else #error &quot;Camera model not selected&quot; #endif // #define Ser_1 14 // #define Ser_2 15 // #define ser_step 5 // Servo sn1; // Servo sn2; // Servo s1; // Servo s2; // int pos1=0; // int pos2=0; static const char* _STREAM_CONTENT_TYPE = &quot;multipart/x-mixed-replace;boundary=&quot; PART_BOUNDARY; static const char* _STREAM_BOUNDARY = &quot;\r\n--&quot; PART_BOUNDARY &quot;\r\n&quot;; static const char* _STREAM_PART = &quot;Content-Type: image/jpeg\r\nContent-Length: %u\r\n\r\n&quot;; httpd_handle_t stream_httpd = NULL; static esp_err_t stream_handler(httpd_req_t *req){ camera_fb_t * fb = NULL; esp_err_t res = ESP_OK; size_t _jpg_buf_len = 0; uint8_t * _jpg_buf = NULL; char * part_buf[64]; res = httpd_resp_set_type(req, _STREAM_CONTENT_TYPE); if(res != ESP_OK){ return res; } while(true){ fb = esp_camera_fb_get(); if (!fb) { Serial.println(&quot;Camera capture failed&quot;); res = ESP_FAIL; } else { if(fb-&gt;width &gt; 400){ if(fb-&gt;format != PIXFORMAT_JPEG){ bool jpeg_converted = frame2jpg(fb, 80, &amp;_jpg_buf, &amp;_jpg_buf_len); 
esp_camera_fb_return(fb); fb = NULL; if(!jpeg_converted){ Serial.println(&quot;JPEG compression failed&quot;); res = ESP_FAIL; } } else { _jpg_buf_len = fb-&gt;len; _jpg_buf = fb-&gt;buf; } } } if(res == ESP_OK){ size_t hlen = snprintf((char *)part_buf, 64, _STREAM_PART, _jpg_buf_len); res = httpd_resp_send_chunk(req, (const char *)part_buf, hlen); } if(res == ESP_OK){ res = httpd_resp_send_chunk(req, (const char *)_jpg_buf, _jpg_buf_len); } if(res == ESP_OK){ res = httpd_resp_send_chunk(req, _STREAM_BOUNDARY, strlen(_STREAM_BOUNDARY)); } if(fb){ esp_camera_fb_return(fb); fb = NULL; _jpg_buf = NULL; } else if(_jpg_buf){ free(_jpg_buf); _jpg_buf = NULL; } if(res != ESP_OK){ break; } //Serial.printf(&quot;MJPG: %uB\n&quot;,(uint32_t)(_jpg_buf_len)); } return res; } void startCameraServer(){ httpd_config_t config = HTTPD_DEFAULT_CONFIG(); config.server_port = 80; httpd_uri_t index_uri = { .uri = &quot;/&quot;, .method = HTTP_GET, .handler = stream_handler, .user_ctx = NULL }; //Serial.printf(&quot;Starting web server on port: '%d'\n&quot;, config.server_port); if (httpd_start(&amp;stream_httpd, &amp;config) == ESP_OK) { httpd_register_uri_handler(stream_httpd, &amp;index_uri); } } void setup() { WRITE_PERI_REG(RTC_CNTL_BROWN_OUT_REG, 0); //disable brownout detector // s1.setPeriodHertz(50); // s2.setPeriodHertz(50); // sn1.attach(2,1000,2000); // sn2.attach(13,1000,2000); // s1.attach(Ser_1,1000,2000); // s2.attach(Ser_2,1000,2000); // s1.write(pos1); // s2.write(pos2); Serial.begin(115200); Serial.setDebugOutput(false); camera_config_t config; config.ledc_channel = LEDC_CHANNEL_0; config.ledc_timer = LEDC_TIMER_0; config.pin_d0 = Y2_GPIO_NUM; config.pin_d1 = Y3_GPIO_NUM; config.pin_d2 = Y4_GPIO_NUM; config.pin_d3 = Y5_GPIO_NUM; config.pin_d4 = Y6_GPIO_NUM; config.pin_d5 = Y7_GPIO_NUM; config.pin_d6 = Y8_GPIO_NUM; config.pin_d7 = Y9_GPIO_NUM; config.pin_xclk = XCLK_GPIO_NUM; config.pin_pclk = PCLK_GPIO_NUM; config.pin_vsync = VSYNC_GPIO_NUM; config.pin_href = 
HREF_GPIO_NUM; config.pin_sscb_sda = SIOD_GPIO_NUM; config.pin_sscb_scl = SIOC_GPIO_NUM; config.pin_pwdn = PWDN_GPIO_NUM; config.pin_reset = RESET_GPIO_NUM; config.xclk_freq_hz = 20000000; config.pixel_format = PIXFORMAT_JPEG; if(psramFound()){ config.frame_size = FRAMESIZE_UXGA; config.jpeg_quality = 10; config.fb_count = 2; } else { config.frame_size = FRAMESIZE_SVGA; config.jpeg_quality = 12; config.fb_count = 1; } // Camera init esp_err_t err = esp_camera_init(&amp;config); if (err != ESP_OK) { Serial.printf(&quot;Camera init failed with error 0x%x&quot;, err); return; } // Wi-Fi connection WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print(&quot;.&quot;); } Serial.println(&quot;&quot;); Serial.println(&quot;WiFi connected&quot;); Serial.print(&quot;Camera Stream Ready! Go to: http://&quot;); Serial.print(WiFi.localIP()); // Start streaming web server startCameraServer(); } void loop() { delay(1); } </code></pre> <p>Now my question is: How to control GPIO pins on ESP32 Camera, from the python script, so that when an object is detected, ESP32-Cam can activate an alarm system?</p>
2022-10-25T22:07:06.243
<p>The <code>on('connect',...)</code> callback will only ever be called <strong>once</strong>, when the client connects. Adding a new event listener for this event every time the get request is handled won't do anything useful and, as the error states, leaks &quot;connect&quot; event listeners.</p> <p>If you just want to publish a message on each request, remove the <code>on('connect',...)</code> wrapper and just call <code>client.publish(...)</code>:</p> <pre><code>app.get('/', function (req, res) { console.log(&quot;Hit Detected&quot;); res.write('&lt;h1&gt;Hello World&lt;/h1&gt;'); res.write(`&lt;h4&gt;v4&lt;h4&gt;`); client.publish(topic, '56 Deg.C', { qos: 0, retain: false }, (error) =&gt; { if (error) { console.error(error); } }) res.end() }); </code></pre>
|mqtt|nodejs|
MQTT Client: Listeners Exceeded Warning
6445
<p>I'm a newbie building an IOT system comprised of two clients, the first client (Client A) will be a Subscriber and the second client (Client B) will be a Publisher. For the Broker, I intend to use something hosted (e.g. Cedalo).</p> <p>Right now, I don't have Client B, but it will be some sort of H/W device that reads data from sensors and because Client B does not exist yet, I'm using Client A as both Subscriber and Publisher. Client A is running nodejs on a virtual machine and I have both express and mqtt packages installed.</p> <p>The express server is up and running, e.g. listening and responding to http requests and in the same index.js file as the express server is the mqtt code that is Publishing and Subscribing to a topic, this is also running, e.g. I can Publish and Subscribe, and data from the Subscribed topic is displayed in the console.</p> <p>Here is the working code:</p> <pre><code>let fs = require('fs'); let http = require('http'); let https = require('https'); let mqtt = require('mqtt'); let express = require('express'); let app = express(); let http_port = 8080; let host = `mqtt://test.mosquitto.org`; let mqttPort = `1883`; let clientId = `domain`; let connectUrl = `${host}:${mqttPort}`; let httpServer = http.createServer(app); let client = mqtt.connect(connectUrl, { clientId, clientId, clean: true, connectTimeout: 4000, username: `domain`, password: `secret`, reconnectPeriod: 1000, }); let topic = `ambientTemp`; client.on(`connect`, () =&gt; { console.log(`Connected to Broker at ${connectUrl}`); client.subscribe([topic], () =&gt; { console.log(`Subscribed to topic: ${topic}`); }); }); client.on(`error`, (err) =&gt; { console.log(`Error: ${err}`); }); client.on(`message`, (topic, payload) =&gt; { console.log(`Received Message:`, topic, payload.toString()) }) client.on('connect', () =&gt; { client.publish(topic, '56 Deg.C', { qos: 0, retain: false }, (error) =&gt; { if (error) { console.error(error); } }) }) // For http 
httpServer.listen(http_port, () =&gt; { console.log(`Listenting on Port:${http_port} Insecure`); }); // this is the home route app.get('/', function (req, res) { console.log(&quot;Hit Detected&quot;); res.write('&lt;h1&gt;Hello World&lt;/h1&gt;'); res.write(`&lt;h4&gt;v4&lt;h4&gt;`); res.end() }); </code></pre> <p>... here is the output from the browser and terminal: <a href="https://i.stack.imgur.com/r56gJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r56gJ.png" alt="enter image description here" /></a></p> <p>The problems start when I try to naively combine the &quot;publish step&quot; with the &quot;route step&quot; like this:</p> <pre><code>let fs = require('fs'); let http = require('http'); let https = require('https'); let mqtt = require('mqtt'); let express = require('express'); let app = express(); let http_port = 8080; let host = `mqtt://test.mosquitto.org`; let mqttPort = `1883`; let clientId = `domain`; let connectUrl = `${host}:${mqttPort}`; let httpServer = http.createServer(app); let client = mqtt.connect(connectUrl, { clientId, clientId, clean: true, connectTimeout: 4000, username: `domain`, password: `secret`, reconnectPeriod: 1000, }); let topic = `ambientTemp`; client.on(`connect`, () =&gt; { console.log(`Connected to Broker at ${connectUrl}`); client.subscribe([topic], () =&gt; { console.log(`Subscribed to topic: ${topic}`); }); }); client.on(`error`, (err) =&gt; { console.log(`Error: ${err}`); }); client.on(`message`, (topic, payload) =&gt; { console.log(`Received Message:`, topic, payload.toString()) }) client.on('connect', () =&gt; { client.publish(topic, '56 Deg.C', { qos: 0, retain: false }, (error) =&gt; { if (error) { console.error(error); } }) }) // For http httpServer.listen(http_port, () =&gt; { console.log(`Listening on Port:${http_port} Insecure`); }); // this is the home route app.get('/', function (req, res) { console.log(&quot;Hit Detected&quot;); res.write('&lt;h1&gt;Hello World&lt;/h1&gt;'); 
res.write(`&lt;h4&gt;v4&lt;h4&gt;`); client.on('connect', () =&gt; { //Additional code block causing issues client.publish(topic, '56 Deg.C', { //Additional code block causing issues qos: 0, //Additional code block causing issues retain: false }, (error) =&gt; { //Additional code block causing issues if (error) { //Additional code block causing issues console.error(error); //Additional code block causing issues } //Additional code block causing issues }) //Additional code block causing issues }) //Additional code block causing issues res.end() }); </code></pre> <p>.... I get this:</p> <p><a href="https://i.stack.imgur.com/UXhjY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UXhjY.png" alt="enter image description here" /></a></p> <p>It doesn't make sense to increase the maximum number of listeners because clearly there is something terribly wrong. This is what I think I know ... that node.js is asynchronous and event driven and that &quot;app.get&quot; is a callback function, as is &quot;client.on&quot; ... I'm struggling to understand why I can't seem to nest these callbacks. I'm missing something fundamental, I think.</p> <p>It's also not clear to me why the &quot;client.on&quot; callback that is nested within the route does not even get called once, i.e. I have put a console.log immediately prior to it and immediately after it and only the first message is displayed (the code doesn't show this, just to explain what I have tried).</p> <p>Any guidance would be appreciated.</p> <p>Thank you, Tim.</p>
2022-11-04T15:56:36.780
<p>You could try an <strong>MQ135</strong> gas sensor and tune it to detect smoke. It has a potentiometer that changes its sensitivity to CO2; adjust it until the sensor triggers at the smoke density you care about. You can use it with an ESP8266 or ESP32, as it is cheap, performant and resilient, provided you buy from a reliable source.</p> <p>That's your whole project: an ESP8266/ESP32, an MQ135, some wires (or a PCB if you can make one), and a 5 V source such as an old phone charger, and you are good to go. You can find ready-to-use code examples.</p>
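The trigger logic on the microcontroller side is just a threshold on the analog reading. A minimal sketch in plain Python, assuming an ESP32-style 12-bit ADC with 3.3 V full scale and a placeholder threshold you would calibrate yourself against real wood smoke:

```python
ADC_MAX = 4095  # ESP32 ADC is 12-bit
V_REF = 3.3     # assumed full-scale voltage (board-dependent)


def adc_to_volts(raw):
    """Convert a raw ADC reading to the MQ135 analog output voltage."""
    return raw / ADC_MAX * V_REF


def smoke_detected(raw, threshold_v=2.0):
    """True when the MQ135 output exceeds the calibrated threshold.

    The 2.0 V default is a placeholder: tune the sensor's potentiometer
    and this threshold together until it trips at the smoke level you
    want to react to.
    """
    return adc_to_volts(raw) > threshold_v
```

On the device you would feed this from the ADC in a loop and, for example, switch off the ventilation when it returns True.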
|smart-home|sensors|
Detect bad air quality because of burned firewood
6456
<p>In my neighbourhood people burn a lot of firewood. I always deactivate my automatic ventilation when I smell it, but I would like to have something better.</p> <p>Are there sensors to detect this?</p>
2022-11-07T15:22:03.740
<p>HiveMQ is one possible production-ready MQTT broker. They also have a community edition, though it may come with some limitations.</p>
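Since the question's current broker is Mosquitto: the per-user isolation asked about can also be done there without a plugin, using a pattern ACL. A sketch (file paths and the topic prefix are examples, not requirements):

```
# /etc/mosquitto/mosquitto.conf
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/aclfile

# /etc/mosquitto/aclfile
# %u is substituted with the connecting client's username, so each
# user can only publish/subscribe beneath their own topic prefix.
pattern readwrite devices/%u/#
```

With this in place, a user authenticated as `alice` can use `devices/alice/#` but gets access denied on `devices/bob/#`; provisioning then reduces to creating a username/password per customer.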
|mqtt|security|esp8266|mosquitto|tls|
Mqtt in production
6460
<p>I am using MQTT to build a project and I want to make it production-ready. My hardware is an ESP-07 on a custom PCB, the broker is Mosquitto, and the app is Node.js / React Native.</p> <ul> <li>I implemented MQTT over TLS and used a Let's Encrypt certificate.</li> <li>I used SmartConfig and added a layer of encryption (DTLS) to it for secure password broadcast.</li> </ul> <p>I am struggling to build an infrastructure in which many users can use my devices on the same broker without having access to other people's devices. I know about access control, but I am asking whether there is a plugin to automate it.</p> <p>Are there any other elements I should be aware of before launching my product to the market?</p>
2022-11-08T18:20:53.097
<p>This sounds like your ISP is operating CGNAT (Carrier Grade Network Address Translation).</p> <p>Most home networks normally operate with NAT (Network Address Translation), which means that all the devices on the home network get given an IP address out of an RFC1918 address range (in this case 192.168.0.x) and the router will remap this to the external IP address given to your router by the ISP. This IP address may change over time (it usually does unless you are paying for a static IP address), but if you know this external address you can set up port forwarding and access devices inside your network.</p> <p>Now, when an ISP deploys CGNAT they no longer hand out a publicly routed IP address to your router; instead they treat all the routers on their network like a home LAN and give them an address, again out of the RFC1918 ranges (in this case a 10.x.x.x address, though they probably should be using 100.64.x.x-100.127.x.x), and then do translation to a much smaller range of public IP addresses at the edge of their network.</p> <p>This basically means that the ISP can VASTLY reduce the number of publicly routable IP addresses they need (which are very expensive now that the only way to get them is to buy them from existing owners rather than getting them assigned from RIPE).</p> <p>But it also means that you won't be able to use port forwarding, because it would need to be set up on both your router and the ISP's edge router (which isn't going to happen).</p> <p>Your choices are as follows:</p> <ol> <li>Move to a new ISP that isn't using CGNAT (this will get harder and harder to do)</li> <li>Use something like <a href="https://ngrok.com/" rel="nofollow noreferrer">ngrok</a> to set up an outbound tunnel that can be used to access your servers</li> </ol>
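If you want to classify programmatically which kind of address you are looking at, Python's `ipaddress` module helps. Note that the CGNAT shared range 100.64.0.0/10 (RFC 6598) has to be tested explicitly, since the module treats it as neither RFC1918-private nor globally routable:

```python
import ipaddress

# RFC 6598 "shared address space" reserved for carrier-grade NAT.
CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")


def address_scope(ip_str):
    """Classify an IPv4 address as 'cgnat', 'rfc1918/private', or 'public'."""
    ip = ipaddress.ip_address(ip_str)
    if ip in CGNAT_RANGE:
        return "cgnat"
    if ip.is_private:
        return "rfc1918/private"
    return "public"
```

Applied to the question: 10.114.160.96 falls in an RFC1918 range, which is exactly why it only works for friends who happen to be inside the same carrier network.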
|routers|ip-address|
Confused: What is this weird IP address?
6463
<p>So I've been looking into NAT traversal for my IoT project, and I'm a little confused. Here's how it goes:</p> <p>When I ask Google &quot;what is my public IP address?&quot;, I get an address like 46.114.190.96. This is, as far as I understand, my Wi-Fi router's IP address, since it serves as the public &quot;portal&quot; between my LAN and the public internet. Let's call this &quot;the public IP&quot;.</p> <p>The router also has a local IP address, 192.168.0.1, which I can use to log in to my router and change some settings. I'm setting up Virtual Server (for port forwarding, as I'd like to access my home server from outside the LAN). I've set it up so that it forwards incoming traffic to my local server, on which, for testing, I hosted a simple web page. So far so good.</p> <p>Things get confusing when I actually try to access the server from outside the home network. If I type in 46.114.190.96, the page doesn't load. I've set the ports and everything up correctly.</p> <p>BUT when I log into my Wi-Fi router, on the landing page it gives me another IP: 10.114.160.96. Let's call this one the &quot;router IP&quot;,</p> <p>and when I type this address in, voila, the page loads.</p> <p>Even weirder, some of my friends typed the &quot;router IP&quot; in and can also access the page, while some other friends cannot (their browser just keeps trying to load until it times out). I've checked that the &quot;router IP&quot; is still the same. It does change every now and then, though.</p> <p>So my question is:</p> <ul> <li><p>What exactly is that &quot;router IP&quot;?</p> </li> <li><p>Why does it work for some of my friends, but not for others?</p> </li> </ul>
2022-12-26T09:12:13.987
<p>Thanks @jsotola for nudging me to talk to the manufacturer, who is also confirming (final confirmation in progress) that the unit is malfunctioning.</p>
|sensors|
Aeotec Trisensor undefined behavior
6513
<p>I've been using a few devices of the same model for several months. While they've been working great, one of them recently started showing strange behavior that I cannot find defined in the user manual. <a href="https://drive.google.com/file/d/1IkxghdSaz7BlhwBv1JMGbUHdR_ZVvgua/view?usp=sharing" rel="nofollow noreferrer">See the video I took and uploaded on my Google Drive</a>.</p> <ul> <li>It seems to me the light pattern is undefined in the manual (some red and many white blinks).</li> <li>Triggering a factory reset (holding the button for 15 seconds) didn't seem to change the behavior.</li> <li>It used to be included in my Z-Wave gateway (<code>Home Assistant</code> via <code>ZWaveJS</code>) and was reporting well, but not anymore. The device doesn't even seem to enter inclusion mode.</li> </ul> <p>Does anyone have an idea of what's going on? I want to determine whether it's malfunctioning (so that I can ask the manufacturer's support).</p> <ul> <li><a href="https://aeotec.com/products/aeotec-tri-sensor/" rel="nofollow noreferrer">Product official page (aeotec.com)</a></li> <li><a href="https://aeotec.freshdesk.com/support/solutions/articles/6000195459-trisensor-user-guide-" rel="nofollow noreferrer">User manual (aeotec.freshdesk.com, linked from the product official page)</a></li> </ul>
2023-01-16T00:30:26.300
<p>The issue is resolved after I set the network adapter of the virtual machine to <strong>bridged</strong>.</p>
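For context: in NAT mode the virtual machine is hidden behind the host and has no address of its own on the LAN, so a device elsewhere on the network (like the ESP32) cannot reach it; bridged mode puts the VM directly on the LAN. A quick way to check reachability from any other machine on the network, using only the Python standard library (the host and port below are the values from the question and are only placeholders):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # unreachable hosts, refused connections and timeouts all land here
        return False

# Example (address/port taken from the question; adjust to your setup):
# print(can_reach("192.168.1.157", 5566))
```

If this returns False from another machine on the same LAN, the server is not reachable from outside the host, which points at the VM network mode rather than at the client code.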
|networking|esp32|ip-address|
Failed to connect to TCP server on ESP32
6537
<p>I wrote a TCP client in C and a TCP server in Python. The client runs on an ESP32S2 board while the server runs on my PC (virtual Linux OS) and both the board and PC are connected to the same Wi-Fi. However, even though the same client code works as expected on my PC, it is not working when the code is loaded onto the ESP32S2. The <code>connect()</code> function returns <strong>errno 113</strong>. I was wondering what could be the underlying issues.</p> <p>Here is the client code (code handling the Wi-Fi connection is omitted for simplicity):</p> <pre class="lang-c prettyprint-override"><code>#define SERVER_IP AF_INET #define SERVER_ADDR &quot;192.168.1.157&quot; #define SERVER_PORT 5566 static int client_fd; static void client_init(struct sockaddr_in addr) { client_fd = socket(SERVER_IP, SOCK_STREAM, 0); addr.sin_family = SERVER_IP; addr.sin_port = htons(SERVER_PORT); /* inet_pton() returns 1 on success, so test for != 1 */ if (inet_pton(SERVER_IP, SERVER_ADDR, &amp;(addr.sin_addr)) != 1) { ESP_LOGE(TAG, &quot;Invalid address or protocol (errno: %d)&quot;, errno); } bzero(&amp;(addr.sin_zero), 8); if (connect(client_fd, (struct sockaddr *)(&amp;addr), sizeof(addr)) == -1) { ESP_LOGE(TAG, &quot;connect error (errno: %d)&quot;, errno); } } void app_main() { wifi_sta_init(); struct sockaddr_in server_addr; client_init(server_addr); } </code></pre> <p>Here is the server code:</p> <pre class="lang-python prettyprint-override"><code>import socket import sys server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_addr = ('0.0.0.0', 5566) server_sock.bind(server_addr) server_sock.listen(1) while True: print(&quot;waiting for a connection&quot;) connection, client_addr = server_sock.accept() try: print(f&quot;connection from {client_addr}&quot;) while True: data = connection.recv(1024) with open('test.txt', 'wb') as file: file.write(data) print(f&quot;received {data}&quot;) if data: break else: print(f&quot;no more data from {client_addr}&quot;) break finally: connection.close() </code></pre> <p>Update: The issue is resolved after I set the network adapter setting of the virtual machine to <strong>bridged</strong>.</p>
2023-01-21T02:16:35.333
<p>An idle connection will use very, very little power. It just needs to handle a few keep-alive packets now and then, plus possibly slightly higher CPU overhead to correctly map incoming packets to the right connection (both at the OS network stack level and in your app), but that should be negligible.</p> <p>Keeping tens of thousands or hundreds of thousands of connections active on a single server is not a problem if you use the right tools (i.e. you don’t have an Apache proxy on the path, for instance).</p> <p>Establishing an HTTPS connection, on the other hand, is extremely expensive, both in terms of CPU and network traffic. You first need to establish a TCP connection, then TLS inside that, then HTTP inside that, probably also adding authentication and any application-level handshake on top of that.</p> <p>That’s the whole reason there are so many mechanisms to avoid doing it again (connection keep-alive, to serve several HTTP requests within a single TCP+TLS connection), or to speed up new negotiations (with caching, but that is limited in time).</p> <p>Also, keeping the connection established has one huge advantage: you can send data from server to client at any time, while with individual connections you need to either perform polling (nooooooo) or long polling to do that, which gives the same result as keeping the connection alive, but with higher overhead.</p> <p>Unless there’s something specific in your setup that would prevent it, go for the permanent connections.</p>
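To put rough numbers on that, here is a back-of-envelope sketch (all sizes are illustrative assumptions, not measurements) comparing one reading per minute sent via a fresh HTTPS POST each time versus a single long-lived MQTT-over-TLS connection with a 60-second keepalive:

```python
# Back-of-envelope: bytes on the wire per device per day, one reading per minute.
# Every size below is a rough assumption for illustration only.

TLS_HANDSHAKE = 5000   # full TLS handshake incl. certificates (bytes)
HTTP_REQUEST = 400     # headers + small JSON payload (bytes)
MQTT_PUBLISH = 60      # small PUBLISH incl. TCP/IP overhead (bytes)
MQTT_PING = 120        # PINGREQ + PINGRESP incl. TCP/IP overhead (bytes)

readings_per_day = 24 * 60           # one reading per minute
pings_per_day = 24 * 60 * 60 // 60   # 60-second keepalive interval

# HTTPS POST with a fresh TLS connection for every reading:
https_bytes = readings_per_day * (TLS_HANDSHAKE + HTTP_REQUEST)

# One long-lived MQTT-over-TLS connection, paying the handshake once:
mqtt_bytes = TLS_HANDSHAKE + readings_per_day * MQTT_PUBLISH + pings_per_day * MQTT_PING

print(https_bytes, mqtt_bytes)  # persistent connection is ~30x cheaper here
```

Even if you halve or double any of the assumed sizes, the per-message TLS handshake dominates; the real per-device trade-off on the server side is CPU for handshakes versus memory for idle sockets.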
|mqtt|data-transfer|https|cloud-computing|
compute power for MQTT vs HTTP POST in cloud
6541
<p>Trying to get an understanding of the cloud-side compute power comparison (and cost) for, say, 10,000 devices sending data to the cloud using <code>HTTPS</code> vs <code>MQTT</code>.</p> <p>My co-worker says keeping 10k <code>MQTT</code> connections alive would eclipse the cost of having the same devices send <code>HTTPS POST</code> requests. My intuition says it's probably about the same, since you'll always need the power to support all connections even when using HTTPS, plus the overhead of building those connections every time.</p> <p>Does anyone have any experience comparing the two? Or would anyone have anecdotal experience?</p> <p>Appreciate the help!</p>
2023-01-25T20:20:05.963
<p>I've not actually added one, but the Home app implies so.</p> <p>Both Chromecasts listed are the new HD with Google TV models and the old key hole shaped model is not listed as an option</p> <p><a href="https://i.stack.imgur.com/r7B7p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r7B7p.png" alt="enter image description here" /></a></p>
|chromecast|
Can a 4th Gen Chromecast ("Chromecast with Google TV") work in a group with Chromecast Audio?
6545
<p>I have five Chromecast Audio devices which produce 3.5mm/optical audio output from one source. I also have a 1st gen Chromecast with HDMI audio [<em>the &quot;keyhole&quot; shape</em>] which cannot play audio streams shared with Chromecast Audio. It will support Spotify, but not as part of the five Chromecast Audio devices in an audio group.</p> <p>Can a 4th generation Chromecast (product name &quot;Chromecast with Google TV&quot;, either HD or 4K) play a Chromecast audio stream shared with Chromecast Audio devices?</p>
2023-02-18T20:27:08.323
<p>The work the good people around openHAB do seems to be the way to go when using the hub, as Jcaron already mentioned.</p> <p>Weirdly, the manual doesn't say which wireless technology it uses, except &quot;Proprietary Wireless Control: 2425 MHz - 2480 MHz&quot;. So if you want to forgo the hub you'll need to do some radio sleuthing, which likely won't be a short project. As a side note, those frequencies go into the range where you're amplitude-limited in the US. So unless you're an expert in radio communications, this is unlikely to be a great starter project for IoT hacking.</p> <p>Either way, if you don't care about using the hub and you just don't want the app, openHAB is probably the best way to go, probably with this <a href="https://community.openhab.org/t/new-sure-petcare-binding-for-cat-pet-flap/82257" rel="nofollow noreferrer">project</a>.</p>
|wifi|mobile-applications|
Hacking wifi control of pet door
6574
<p>I'm considering buying the <a href="https://www.surepetcare.com/en-us/pet-doors/microchip-cat-flap-connect" rel="nofollow noreferrer">Sure Petcare Microchip Cat Flap Connect</a>.</p> <p><a href="https://i.stack.imgur.com/5SpC5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5SpC5.png" alt="pet door photo" /></a><a href="https://i.stack.imgur.com/o030J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o030J.png" alt="pet door illustration" /></a></p> <p>I like the idea of being able to set a schedule for when my cat is allowed to exit the house and also to control it via the internet when I want to change the schedule temporarily.</p> <p>It says &quot;You’ll need [to also buy the $85] Hub to use the <a href="https://play.google.com/store/apps/details?id=com.sureflap.surepetcare&amp;hl=en_US" rel="nofollow noreferrer">Sure Petcare App</a>.&quot;</p> <p>The app gets terrible reviews, so I'd rather not use it if I can avoid it.</p> <p>I'm a software engineer and am new to IoT hacking.</p> <p>I'm curious whether there would be some way to buy just the cat flap (and not the hub) and write my own software for controlling it (if the cat flap could connect to my WiFi network somehow).</p> <p>Where could I learn how to do this?</p>
2023-02-23T17:57:05.760
<p>Yes it does.</p> <p><a href="https://forum.armbian.com/topic/26913-does-orange-pi-5-have-ptp-hardware-timestamping-support/#comment-160778" rel="nofollow noreferrer">https://forum.armbian.com/topic/26913-does-orange-pi-5-have-ptp-hardware-timestamping-support/#comment-160778</a></p> <p>A kind user on the armbian forum, @royk, ran the ethtool command, and posted the output which shows:</p> <pre><code>Time stamping parameters for eth0: Capabilities: hardware-transmit software-transmit hardware-receive software-receive software-system-clock hardware-raw-clock PTP Hardware Clock: 0 Hardware Transmit Timestamp Modes: off on Hardware Receive Filter Modes: none all ptpv1-l4-event ptpv1-l4-sync ptpv1-l4-delay-req ptpv2-l4-event ptpv2-l4-sync ptpv2-l4-delay-req ptpv2-event ptpv2-sync ptpv2-delay-req </code></pre>
|ethernet|
Does Orange Pi 5 have PTP hardware timestamping support?
6577
<p>I'm looking for information about the Ethernet chip on the Orange Pi 5. I need an SBC that has PTP hardware timestamp support (IEEE 1588). According to OrangePi.org, the Orange Pi 5 has the Motorcomm YT8531C, but I have not been able to find documentation that provides the answer about hardware PTP support. Most likely it does not, but I'd like to confirm on an actual device before purchasing one.</p> <p>If you have an Orange Pi 5, you can test this with this command:</p> <p><code>ls /sys/class/net |xargs -n1 ethtool -T</code></p> <p>If there is someone that can post the output of this command, I would appreciate it very much. Sorry for the odd question, this is the best StackExchange site I could find to ask this question. I would ask on the OrangePi forum, but it doesn't run HTTPS, so I'd rather not create an account there.</p>
2023-03-18T17:09:46.893
<p>To control the specific functions you mentioned, you may need to add custom commands or routines to your Google Home app. Here are some general steps you can take to create custom voice commands:</p> <ol> <li>Open the Google Home app on your phone or tablet.</li> <li>Tap on the &quot;+&quot; sign at the top left corner of the screen to create a new routine.</li> <li>Give your routine a name that you'll remember, like &quot;Watch Netflix.&quot;</li> <li>Under &quot;When,&quot; select &quot;Voice Command&quot; and enter the phrase you want to use to activate the routine, like &quot;Watch Netflix.&quot;</li> <li>Under &quot;Actions,&quot; select &quot;Add Action&quot; and choose &quot;Smart Home.&quot;</li> <li>Select the device you want to control, like the MoesGo Smart IR Unit, and choose the function you want to perform, like &quot;Select HDMI1&quot; or &quot;Press Netflix Button.&quot;</li> <li>Repeat step 6 for any additional functions you want to include in the routine.</li> <li>Tap &quot;Save&quot; to save your routine.</li> </ol> <p>Once you've created your routine, you should be able to activate it by saying the voice command you chose, like &quot;Watch Netflix.&quot; Google Home should then send the appropriate commands to the MoesGo Smart IR Unit to perform the functions you specified in the routine.</p> <p>If you don't see the specific functions you want to control in the Google Home app, you may need to consult the documentation for the MoesGo Smart IR Unit or contact their customer support for assistance in adding these functions to your device.</p>
|smart-home|google-home|smart-tv|
How to Control a TV via Google-Home and Smart Life
6592
<p>I've set up a MoesGo Smart IR unit with Smart Life for a friend to control a TV. I can now use the virtual remote control on my phone to control the TV, which is quite handy when he loses the physical remote.</p> <p>The Smart Life account was integrated with Google Home a while ago so that all the smart appliances he has can be voice controlled. Now that we have a Smart IR Unit we can use voice commands to:</p> <ul> <li>Turn the TV On and Off</li> <li>Change channel</li> <li>Raise and lower the volume</li> <li>Mute and Unmute sound</li> </ul> <p>This was just a matter of guessing the commands, and to be honest they were fairly obvious. The problem we've got is that there's a lot more we'd like to do with the TV but we are unable to guess the right commands! For a start we'd like to:</p> <ul> <li>Select the video source (TV, HDMI1, HDMI2 etc)</li> <li>Press the Netflix button</li> <li>Navigate Netflix (we are happy to say &quot;Press the up Button&quot; if that will work)</li> </ul> <p>We've tried everything we can think of but Google Home just says that it doesn't understand. I did find a web site that suggested saying <em>&quot;Talk to TV&quot;</em>, and Google does appear to understand this is a valid command, but the action it takes is to turn the TV off! (I suspect it's toggling the power button.)</p> <p>Does anybody know what commands we can use?</p>
2023-03-23T16:21:29.340
<p>Yes, they still need to use &quot;IoT Protocols&quot;</p> <p>Communication between the vendors cloud services and a Hub (E.g. an Amazon Echo or a Google/Nest Hub) device still uses things like MQTT, it then maps this to a Matter/Zigbee/Thread message to any individual devices.</p>
|smart-home|mqtt|protocols|cloud-computing|matter|
How do I access my Matter devices when I’m not at home?
6597
<p>What do device developers and major platform providers such as Google Home, Apple, Tuya, and Aqara do to control Matter devices remotely? Do they still use traditional IoT protocols like MQTT?</p> <p>I tried searching but didn't find much information. Please help me with this question.</p>
2023-04-07T21:06:33.037
<p>Yes, sufficient for a 7B model.</p> <p>You may refer to this link for details: <a href="https://nvidia-ai-iot.github.io/jetson-generative-ai-playground/tutorial_text-generation.html" rel="nofollow noreferrer">https://nvidia-ai-iot.github.io/jetson-generative-ai-playground/tutorial_text-generation.html</a></p> <p><a href="https://i.stack.imgur.com/Oy89n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oy89n.png" alt="enter image description here" /></a></p>
|machine-learning|
Do Jetson Orin Nano-like systems with 8GB RAM suffice to experiment with large language models?
6607
<p>I would like to experiment with a platform that can run a virtual assistant (think e.g. <a href="https://mycroft.ai/" rel="nofollow noreferrer">https://mycroft.ai/</a>) so as to perform tasks like speech-to-text, conversations, text-to-speech, image segmentation and object detection (e.g. when fed with an RTSP stream).</p> <p>I'm also curious about performing inference with large language models (LLMs), like those from Facebook Research's LLaMA (<a href="https://github.com/facebookresearch/llama" rel="nofollow noreferrer">https://github.com/facebookresearch/llama</a>).</p> <p>Would a platform like the Nvidia Jetson Orin Nano (8GB RAM) support LLM inference, or would it still be underpowered, e.g. memory-constrained? What specs should I aim for? I would use mainly PyTorch as the underlying framework.</p>
2023-04-07T21:45:14.760
<p>I was facing the same issue, trying various Micropython versions with my ESP32 WROOM chip.</p> <p>Micropython's download page for the <a href="https://micropython.org/download/ESP32_GENERIC_S3/" rel="nofollow noreferrer">appropriate ESP Firmware</a> makes it clear <strong>they support SPIRAM / PSRAM</strong>:</p> <blockquote> <p>This firmware supports configurations with and without SPIRAM (also known as PSRAM) and will auto-detect a connected SPIRAM chip at startup and allocate the MicroPython heap accordingly. However if your board has Octal SPIRAM, then use the &quot;spiram-oct&quot; variant.</p> </blockquote> <p>Only after looking at the <a href="https://www.espressif.com/sites/default/files/documentation/esp32-s3-wroom-1_wroom-1u_datasheet_en.pdf" rel="nofollow noreferrer">ESP32-S3-WROOM Datasheet</a>, I realized that <strong>there are packages without PSRAM</strong>, and that the one sitting on my desk was one of them (<code>ESP-S3-WROOM-1-N16</code>).</p> <p><a href="https://i.stack.imgur.com/pskGm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pskGm.png" alt="ESP32-S3-WROOM-1 Series Comparison" /></a></p> <p>After erasing and re-flashing the Firmware, I am now able to use Micropython on the ESP32 with the 512KB of internal RAM (the warnings still appear, however).</p>
|esp32|micropython|
Unable to run micropython on ESP32-WROOM
6608
<p>I just took delivery of a trio of ESP32-WROOM-32 dev boards and am unable to get micropython to run on them. I've successfully flashed sketches from the Arduino IDE, but after I flash the latest esp32spiram.bin and try to connect with Thonny, I see an endless loop of</p> <pre><code>ELF file SHA256: 46bca36b7d6020a6 Rebooting... ets Jul 29 2019 12:21:46 rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) configsip: 0, SPIWP:0xee clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00 mode:DIO, clock div:2 load:0x3fff0030,len:4540 ho 0 tail 12 room 4 load:0x40078000,len:12788 load:0x40080400,len:4176 entry 0x40080680 E (650) psram: PSRAM ID read error: 0xffffffff E (651) spiram: SPI RAM enabled but initialization failed. Bailing out. E (652) spiram: SPI RAM not initialized Re-enable cpu cache. abort() was called at PC 0x400d3ea6 on core 0 Backtrace:0x400964a9:0x3ffe3b60 0x40096a99:0x3ffe3b80 0x4009a355:0x3ffe3ba0 0x400d3ea6:0x3ffe3c10 0x400d3ef7:0x3ffe3c30 0x400825ee:0x3ffe3c50 0x40081e1b:0x3ffe3c70 0x40078f5c:0x3ffe3c90 |&lt;-CORRUPTED </code></pre> <p>Can anyone suggest what I might be doing wrong?</p>
2023-04-16T14:44:52.810
<p>As of MicroPython 1.20.0 (2023-04-26) the above code works correctly.</p>
|wifi|esp32|micropython|
Can't Connect to ESP32 in AP Mode
6616
<p>I'm trying to connect to an ESP32-WROOM-32 in AP mode, using the following micropython code:</p> <pre><code>import network ssidAP = 'WiFi_ESP32' #Enter the router name passwordAP = '12345678' #Enter the router password local_IP = '192.168.1.10' gateway = '192.168.1.1' subnet = '255.255.255.0' dns = '8.8.8.8' ap_if = network.WLAN(network.AP_IF) def AP_Setup(ssidAP,passwordAP): ap_if.ifconfig([local_IP,gateway,subnet,dns]) print(&quot;Setting soft-AP ... &quot;) ap_if.active(True) ap_if.config(essid=ssidAP,authmode=network.AUTH_WPA_WPA2_PSK, password=passwordAP) print('Success, IP address:', ap_if.ifconfig()) print(&quot;Setup End\n&quot;) try: AP_Setup(ssidAP,passwordAP) except: print(&quot;Failed, please disconnect the power and restart the operation.&quot;) ap_if.disconnect() </code></pre> <p>This code is a direct copy from the FreeNove ESP32 tutorial.</p> <p>The device shows up on my phone as a WiFi AP, but when I try to connect, my phone spins for a minute and then reports &quot;Couldn't obtain IP address&quot;. I've tried this with two different phones (both Android) and two different boards (the other is an ESP32-WROVER-E), and I've also confirmed that it <em>does</em> work when running the equivalent Arduino (C++) sketch.</p> <p>What am I doing wrong? I'm running MicroPython v1.19.1 firmware.</p>
2023-05-05T07:53:03.690
<p>If anyone is facing the same problem, here's what I did to make it work:</p> <ul> <li>create an application on The Things Stack Open Source (available by clicking on the link of the built-in LoraWAN server)</li> <li>add an end device</li> <li>change the Dragino GW's primary LoRaWAN server to <code>Local Host/Built-in Server</code> and disable the secondary one</li> </ul> <p>And it worked. The reason why there was no Join Accept is because the end device had the wrong keys (AppKey, DEVEUI, etc), and the primary LoRaWAN server was not correctly configured.</p> <p>Check this post : <a href="https://iot.stackexchange.com/q/6034/19652">#6034 LoRaWAN device not receiving Join Accept</a></p> <p>Hope this can help!</p>
|lorawan|
Gateway receives Join Request from end device, but it doesn't accept
6630
<p>I'm quite new to IoT in general.</p> <p>I have a Dragino LPS8v2 gateway, and I would like to create a private LoRaWAN network for my end device (TTGO T-Beam).</p> <p><strong>What I did :</strong></p> <ul> <li>configure GW's LoRa parameters (frequency plan)</li> <li>configure built-in LoRaWAN server (semtech UDP). As mentioned in the Dragino documentation, I selected as primary LoRaWAN server <code>The Things Network V3</code>, and as second server <code>Local Host/Built-in Server</code></li> </ul> <p><a href="https://i.stack.imgur.com/hyrn9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hyrn9.png" alt="Gateway's LoRaWAN setup" /></a></p> <ul> <li>Register gateway in The Things Stack Open Source by clicking the link in the GW's configuration page (System -&gt; Built-in Server)</li> </ul> <p><a href="https://i.stack.imgur.com/xGeoD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xGeoD.png" alt="Link to The Things Stack Open Source" /></a></p> <p><strong>What I expect :</strong></p> <p>I expect my end device to connect to the Dragino gateway, and therefore use the built-in LoRaWAN server.</p> <p><strong>What is happening :</strong></p> <ul> <li>GW seems ok (logs show nothing special)</li> <li>end device sends Join Request to gateway, but connects to another near gateway (which is connected to TTN).</li> </ul> <p>For context, the Dragino GW is next to the end device (&lt;1 meter, because I'm trying to set it up and debugging). The other GW is 5-10 meters away.</p> <p>Also, when checking the Dragino's traffic, the <code>DEVEUI</code> seen is the one of my end device, used for an application on The Things Network.</p> <p>I have a few questions:</p> <ol> <li>Why is the end device not connecting to the Dragino GW ? Is it because it is too near? 
Or because the other GW responded faster?</li> <li>Is it correct to choose <code>The Things Network V3</code> as the primary LoRaWAN server, even though I want to use the built-in LoRaWAN server? And if so, why?</li> <li>When using a built-in LoRaWAN server, the GW shouldn't be connected to The Things Network, is that right?</li> <li>Should I add an application in The Things Stack, as well as an end device? I already created an app and registered the end device on The Things Network...</li> </ol> <p>Thanks in advance.</p>
2023-06-24T14:11:41.527
<p>Most definitely not a complete answer as we lack a lot of info and this is going to be a lot of trial and error to attempt to get something, but a few pointers:</p> <ul> <li>iOS apps use BLE (Bluetooth Low Energy) rather than classic Bluetooth.</li> <li>There are many BLE scanner apps available both in iOS and macOS. You should be able to find the device being advertised. In some you can even connect to the device. With some luck this is enough to enable WiFi.</li> <li>You can use <a href="https://developer.apple.com/documentation/corebluetooth" rel="nofollow noreferrer">Core Bluetooth</a> to communicate with the device from an app on your Mac</li> <li>This will most likely involve scanning for devices, then connecting to the device, then possibly exchanging some data to control the camera and activate WiFi. As stated above, if you are lucky enough, just connecting to the device at the BLE level will be enough. If not then it’s going to be a lot harder because you need to find out what should be exchanged.</li> <li>Once connected you can enumerate the services and characteristics of the device. 
With some more luck, some of them will be standard and give a hint of what is possible.</li> <li><a href="https://github.com/mickeyl/core-bluetooth-tool" rel="nofollow noreferrer">This</a> is one example of a simple CLI app using Core Bluetooth.</li> <li>If connecting is not enough and there are no standard services, you could try to make your Mac simulate the camera: you’ll need to switch the role of your Mac (run as a “Peripheral” rather than a “Central”), advertise the same data, services and characteristics as the camera, and see what the app sends.</li> <li>If it’s more than just “set characteristic X of service Y to value Z” to enable WiFi, and there is an actual exchange, then you would probably need to run as both Central (with the camera as a Peripheral) and Peripheral (with the phone as Central) and relay messages between them, but this will probably require the phone not to be able to see the camera…</li> </ul> <p>Are you sure the “hand-off” was from the device? My guess is that it’s more likely to be from your phone (and the phone got it from the app, there are APIs to automatically connect to WiFi networks).</p> <p>As for the thumbnails, either they are somewhere in the file tree as well or there is some webservice to get them (as well as a list of videos etc.). If nothing seems to be available directly in the file hierarchy, what you could try is to have your Mac between the device and your phone while it is connecting. 
The details will vary depending on a number of factors (including whether it uses an IP or a domain name, whether it uses TLS and actually checks certificates or not…).</p> <p>It’s going to be a bit convoluted but you could share the connection to the device with another network (you’ll probably need to share to Ethernet and then have an AP for the iPhone to connect to), and start by observing using Wireshark what the phone sends (note that you’ll have to somehow separate what is actually from the app from the rest).</p> <p>One other thing you could look into is to activate developer mode on your phone, then use the Console app (not Terminal, Console) to view the logs on your phone. Sometimes apps log lots of useful information.</p>
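Since the camera already serves a bare directory listing over HTTP, one low-tech option for finding the thumbnails is to crawl the tree starting at <code>/DCIM</code> and look for small <code>.JPG</code> files or a thumbnail directory next to the videos. Here is a sketch of a listing parser using only the Python standard library (the base URL is a made-up placeholder; substitute the camera IP found with nmap):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href targets from a simple directory-listing page."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # resolve relative links against the listing's URL
                    self.links.append(urljoin(self.base, value))

def extract_links(html, base="http://192.168.1.1/DCIM/MOVIE/"):
    # base is a guess at the camera's address; adjust to the real one
    parser = LinkExtractor(base)
    parser.feed(html)
    return parser.links

# Against the real camera (untested assumption that it serves plain HTML):
# import urllib.request
# html = urllib.request.urlopen("http://<camera-ip>/DCIM/MOVIE").read().decode()
# print(extract_links(html))
```

From the extracted URLs you can then filter by extension, or issue HEAD requests to spot the small files that are likely thumbnails rather than full videos.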
|networking|wifi|
How to inspect trail camera network
6655
<p>I was recently given a trail camera, a &quot;wildlife camera.&quot; It works great, but the mobile app to connect and see the pictures/videos is very janky and hard to use. I write software for a living, so building something that can read files from a server is the easy part. What I'm trying to figure out is how to connect to the camera wirelessly. (I mostly figured that out while writing this post.)</p> <p>Here's what I can discern so far.</p> <ol> <li>The app appears to connect over Bluetooth first.</li> <li>It then prompts &quot;Join network TC08-....&quot;</li> <li>I tried joining the wireless network directly from macOS while connected to the app; it prompted me with the &quot;hand-off&quot; feature where you can share a password across devices. When I went and checked what password was shared it was, wait for it, 12345678.</li> <li>The network only allows one device at a time (is that a thing?), because if I get the computer connected, the iOS app can't connect.</li> <li>The WiFi network only boots up when I connect over Bluetooth; I assume this is a power-saving feature. I don't see the camera on my list of available Bluetooth devices in macOS.</li> <li>Once connected I used <code>nmap</code> to find the open IP/ports. Here's the list: 80, 443, 3333, 8192, 8193.</li> <li>I can navigate through the browser to the IP, and get download links for all the photos and videos; it's basically a very simple webpage: <code>/DCIM/MOVIE</code> is where the videos are stored.</li> </ol> <p>So this is very close to my goal, however, I don't want to download each video file. The app has access to thumbnails of the video files, but I'm not sure where they are stored. 
I wonder, if I could connect the two devices, whether I could use Wireshark to view which paths are requested by the app.</p> <p>The two parts I'd like to nail down:</p> <ol> <li>connect over Bluetooth from the computer, so I can trigger the WiFi to start.</li> <li>find the thumbnails so I don't have to download each video file.</li> </ol>
2023-07-11T02:38:59.303
<p>This is a very broad question with lots of possible answers, but here are three possible approaches:</p> <ul> <li><p>Pure software: an app or a website they can use on their mobile.</p> </li> <li><p>Hardware solution, direct link to your home: given the distance and very likely obstacles, this is probably not possible using WiFi and even less with BLE. You would need a device using a protocol that supports longer distances, like LoRa, but even then, obstacles on the way can radically change the feasibility. If you want to go down this road, there are plenty of boards with an ESP32 and a LoRa radio. Add a battery, a decent case with a button, and write some simple software that will put the device in deep sleep, set to wake up on button press, in which case it will send a message over LoRa and go back to sleep. You’ll need a device at your home to receive those messages and act upon them. Watch out for packet loss.</p> </li> <li><p>Hardware, using the local network: the easiest option is probably to connect to the local WiFi. Other than that, the approach is the same as above in terms of the device (but there are even more ESP32 boards without LoRa), and here you need a server reachable from the Internet to receive the messages.</p> </li> </ul> <p>There are other alternatives involving a board with a cellular modem instead, and some would probably suggest WiFi HaLow or other radio protocols.</p> <p>There are of course a lot of details and caveats in each option, but this should get you started until you come up with more specific questions.</p>
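Whichever radio option you pick, the payload each clicker sends can be tiny. As an illustration, here is a possible 3-byte message format (the field layout and names are my own invention, not part of any protocol), packed with Python's <code>struct</code> so the same logic ports easily to MicroPython on an ESP32:

```python
import struct

# Hypothetical 3-byte payload: friend id (1 byte), attending flag (1 byte),
# and a wrapping message counter (1 byte) so the receiver can spot lost packets.
FMT = "BBB"

def pack_click(friend_id: int, attending: bool, counter: int) -> bytes:
    """Build the on-air payload for one button press."""
    return struct.pack(FMT, friend_id, int(attending), counter % 256)

def unpack_click(payload: bytes):
    """Decode a received payload back into (friend_id, attending, counter)."""
    friend_id, attending, counter = struct.unpack(FMT, payload)
    return friend_id, bool(attending), counter
```

Keeping the payload this small matters for LoRa in particular, where airtime (and thus duty-cycle budget and battery life) grows with message size.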
|protocols|
How to build a roll call IoT app?
6664
<p>I've brainstormed an idea for an IoT hobby project but don't know exactly where to start in terms of the implementation. Here's the idea: ten of my friends who live within a half-mile radius of my house like to get together often. I want to give each of my friends a small clicker (or the like) to report to me whether they will be able to attend an outing. So, for instance, on Sunday morning each friend will 'click' to inform me whether they will come later that afternoon. What is the most appropriate protocol to use? What type of hardware will I need?</p>
2023-07-14T19:41:47.000
<p>In TLS crypto there are a few different objects which are related to each other.</p> <p>The two basic objects are the private and public keys, which are generated at the same time and are linked to each other (the private key allows one to decipher data encrypted with the public key, the public key allows one to verify a signature generated with the private key).</p> <p>The private key should remain secret, and it is best if it actually never moves away from the device where it was generated. The public key is public and can be shared with anyone.</p> <p>So the best course of operation is to generate the private/public key pair on the device where the private key will be used, and then only communicate the public key and objects derived from it.</p> <p>The main type of object derived from it is a certificate. A certificate contains the public key, but also identifying information, and it is signed by a certification authority (CA) which vouches for that identifying information.</p> <p>To get a certificate from the CA, one creates a “certificate signing request” (CSR). One needs the private/public key pair to generate it, and adds the identifying info into the CSR (but since it has not been signed by the CA, that information is not trusted yet).</p> <p>So the “right” way to do things is as follows:</p> <ul> <li>Generate private/public key pair on device</li> <li>Generate CSR on device based on the above + identification</li> <li>Send CSR to CA</li> <li>CA verifies somehow that whoever sent the CSR is who they say they are</li> <li>CA creates a certificate using the info from the CSR and signs it with their own private key</li> <li>CA returns the certificate</li> <li>Now device has private key and signed certificate.</li> </ul> <p>This is what CreateCertificateFromCsr lets you do. 
For this you need to generate the private/public key pair, generate the CSR (with the right data), send the CSR using this call, and you get back the certificate.</p> <p>Now, some people aren’t comfortable doing all this and prefer something simpler. That’s where CreateKeysAndCertificate comes in. You don’t need any input; they will do everything for you: generate the key pair, the CSR, sign it. In the end they will return the private key and certificate, which is all you need.</p> <p>The difference is that the private key will have been generated on their side and is transmitted over the network, which means someone, either at their end or in between, could try to intercept it, and then act as if they were you. In this scenario this is unlikely, so you can probably save yourself the hassle and use the second one.</p>
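To make the first flow concrete, here is what the on-device side might look like with plain OpenSSL (the key size, file names, and the <code>CN</code> value are placeholders; AWS IoT also accepts EC keys):

```shell
# Generate the private key on the device; it never needs to leave it.
openssl genrsa -out device.key 2048

# Build a CSR from that key, embedding the identifying info (the CN here is a
# made-up thing name -- use whatever your provisioning template expects).
openssl req -new -key device.key -out device.csr -subj "/CN=device123"

# Inspect the CSR before sending it to CreateCertificateFromCsr.
openssl req -in device.csr -noout -subject
```

The resulting <code>device.csr</code> is what you would pass to CreateCertificateFromCsr; <code>device.key</code> never has to leave the device.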
|aws-iot|aws|
What is the difference between using CreateCertificateFromCsr and CreateKeysAndCertificate?
6670
<p>When doing fleet registration of IoT devices on AWS' IoT, what is the difference between using <a href="https://docs.aws.amazon.com/iot/latest/developerguide/fleet-provision-api.html" rel="nofollow noreferrer">CreateCertificateFromCsr and CreateKeysAndCertificate</a>? They seem similar enough to where <em>both</em> are not needed, so why does AWS provide both?</p>
2023-08-27T02:32:17.273
<p>Your answer lies in going the IFTTT route to control your device. You can then control it from Alexa as well as other sources of control such as your javascript file.<br /> Here are some Alexa IFTTT integrations to get you started.<br /> <a href="https://ifttt.com/amazon_alexa" rel="nofollow noreferrer">https://ifttt.com/amazon_alexa</a></p>
|raspberry-pi|alexa|nodejs|
Send Alexa a command from another device?
6700
<p>I'm in the process of creating a Raspberry Pi-powered home automation system. I have a complicated node.js script running on the Pi, and I want to be able to send commands from the Pi to my Alexa (e.g. turn on lights, say something, etc). I've looked into AVS (Amazon Voice Service) but I'd have to convert text to an audio command for that, and I was wondering if there's a simpler way. Does anybody have experience in this type of thing?</p>
2023-08-30T16:29:54.840
<p>LoRa is the &quot;raw&quot; radio modulation. You create a packet, it sends it as is, with only the LoRa modulation, no additional data (addresses, encryption, etc.).</p> <p>Most people will use LoRaWAN rather than just raw LoRa. LoRaWAN uses the LoRa modulation, but adds quite a few things on top of that, including addressing, encryption, optional acknowledgments and confirmed packets, join requests, handling on multiple channels and data rates (including changes to those), etc.</p> <p>LoRaWAN however relies on a &quot;hub and spoke&quot; or &quot;star&quot; topology: there's a gateway (or there could be several), and devices talk to the gateway, but not to each other. Gateways are connected to an LNS (LoRa Network Server), which in turn talks to servers on the Internet (usually the goal of LoRa devices is to report data to a server somewhere on the Internet, and sometimes receive a little bit of information like configuration from the same).</p> <p>LoRaWAN gateways are more complex than end-devices because they have to listen on multiple channels and multiple data rates at the same time, and then need to communicate with an LNS.</p> <p>One option if you are covered is to use an existing LoRaWAN network (the most ubiquitous is The Things Network, but there are others). Then you register your LoRaWAN end device and your application server on that network, and you can set up your device to join the network and then whatever it sends should end up on your server. Note that you will use different APIs, as you need a LoRaWAN implementation, not just raw LoRa.</p> <p>Alternatively, you can have your own gateway, and register it on an existing network. 
Or have your own gateway and your own LNS.</p> <p>If you don't want LoRaWAN for whatever reason but want to stick to pure device-to-device communication and you need more, then you will need to implement those additional features yourself.</p> <p>There have also been implementations of mesh protocols over LoRa, though I'm not familiar with the details.</p>
|arduino|lora|lorawan|
Arduino LoRa communication protocol
6702
<p>I am fairly new to IoT and to all kinds of Arduino chips/sensors etc. I am trying to understand the mechanism of <code>LoRa</code> device communication. With the help of the official Arduino web page I found the following code:</p> <pre><code>void setup() {
  Serial.begin(9600);
  while (!Serial);

  Serial.println(&quot;LoRa Sender&quot;);

  if (!LoRa.begin(868E6)) {
    Serial.println(&quot;Starting LoRa failed!&quot;);
    while (1);
  }
}
</code></pre> <p>As far as I understand <code>LoRa.begin(868E6)</code> sets the frequency of transmission of the device. The code continues with the <code>loop</code> function:</p> <pre><code>void loop() {
  Serial.print(&quot;Sending packet: &quot;);
  Serial.println(counter);

  // send the data to all
  // possible consumers ?
  LoRa.beginPacket();
  LoRa.print(&quot;hello &quot;);
  LoRa.print(counter);
  LoRa.endPacket();

  counter++;
  delay(5000);
}
</code></pre> <p>Now if I am not mistaken, every other <code>LoRa</code> device within the range of the signal that is also set up to the same frequency (<code>868E6</code>) can receive the data produced by the transmitting device. Is my understanding correct? And if yes, then how can we prevent unwanted devices from interfering with our system/setup? Should we use some kind of encryption? Maybe certificates?</p> <p>Thanks in advance to anyone that can help!</p>
2023-09-16T18:24:04.743
<p>When a TLS connection is established, the server will present a certificate, and possibly intermediate certificates.</p> <p>Your device should then decide if it trusts the presented certificate. It does so by having a trust store, where it holds certificates it… trusts.</p> <p>A certificate presented by the server may be trusted if:</p> <ul> <li>The certificate itself is trusted. This is how you would be able to work with self-signed certificates, for instance.</li> <li>One can establish a certificate chain from that certificate to a certificate which is trusted. This is the most common case.</li> </ul> <p>In the latter case, the device will check who signed the certificate (it will give the identity of another certificate). It will try to find that certificate, either in the list of certificates presented by the server, or in the trust store.</p> <p>If it doesn’t find the signing certificate, then the original certificate cannot be trusted.</p> <p>Then it will check that the signature on the original certificate matches the signing certificate. If it doesn’t, again, the certificate won’t be trusted.</p> <p>[skipping a number of other checks such as validity dates, hash types and sizes, etc.]</p> <p>Now we know that the original certificate (A) was signed by another certificate (B). If B was in the trust store (more specifically, trusted to sign other certificates), then that’s it, since A is signed by B and we trust B, then we trust A.</p> <p>If B was not in the trust store but in the list of certificates presented by the server (an intermediate cert), then we start the process again, trying to find who signed this cert, whether we can trust it, etc.</p> <p>There are different ways of establishing which certificates your device will trust.</p> <p>The normal way in a public PKI, like that used to check domain names in browsers, is to have a (looooong) list of “root certificates” in the trust store. 
Each vendor, independently and/or in cooperation with the CA/Browser forum, will decide which Certification Authorities (CAs) they trust, and thus which root certificates to include.</p> <p>That list is frequently updated, with new root certs added, old ones replaced, and sometimes certs removed altogether (when the device vendor and/or industry consider that the CA is not trustworthy and has issued certificates incorrectly, or is at risk of doing so).</p> <p>In some cases the private key of a cert may be leaked. In such cases the associated certificate can no longer be trusted.</p> <p>Also remember that when you renew your certificate, there is absolutely no guarantee that it will still be signed by the same chain of certificates, or even the same root.</p> <p>So the public PKI is a living thing. You just can’t rely on a small and fixed list of root certs if you use it.</p> <p>If your device has the ability to hold a full certificate store, you should have that, and include a way to keep it updated. You can for instance find curl’s CA list, extracted from Mozilla, <a href="https://curl.se/docs/caextract.html" rel="nofollow noreferrer">here</a>.</p> <p>The alternative is not to use the public PKI, but to have your own CA and/or certs.</p> <p>Some people will just store the server cert directly in the device. Others will set up their own CA, store the root cert of the CA in the device, and issue certificates signed by that CA for their server.</p> <p>In any case, it’s still a very good idea to be able to:</p> <ul> <li>Store multiple trusted certs</li> <li>Have a way to update that list</li> </ul> <p>The mechanism will be exactly identical to the public PKI case. The differences will be that:</p> <ul> <li>The list of trusted certs will be a lot shorter, so on very constrained devices that can help.</li> <li>The list of trusted certs should be updated a lot less often. 
Again, that could help for some devices with limited connectivity.</li> </ul> <p>The fact that your server is also accessible by browsers is not an issue: just point two different names to your server, and use different certificates, one from the public PKI and another from your own. Use the domain with the public PKI cert for generic users, the other for your devices.</p> <p>However this would fail if your devices end up behind a proxy or firewall (explicit or transparent).</p>
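As a sketch of that "own CA" approach (all names and validity periods below are arbitrary examples), a private root and a server certificate can be minted with OpenSSL; the device then only needs `ca.pem` in its trust store, regardless of how often the server certificate is reissued:

```shell
# Self-signed root for a private CA; this is the single certificate
# the devices keep in their trust store
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=My IoT Root CA" -days 3650

# Server key + CSR, then a server certificate signed by the private CA
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=iot.example.internal"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 825 -out server.pem

# The same chain check the device performs during the TLS handshake:
openssl verify -CAfile ca.pem server.pem   # prints: server.pem: OK
```

`iot.example.internal` would be the device-facing hostname; the generic-user hostname can keep its public-PKI certificate.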
|https|tls|
TLS Certificate life for IoT System
6708
<p>I'm developing an IoT product's software, where it runs an HTTPS client.</p> <p>Given that server certificates are renewed every year, that's not much of an issue while we're authenticating against the intermediate certificate (as with Let's Encrypt).<br /> Given that those certificates also have a lifetime, <strong>a solution might be to</strong> select a CA whose certificates have a long lifetime [longer than the product lifetime (perhaps ~10 years)].</p> <p>Given this suggested solution, <strong>what are recommended CAs for meeting this criterion?</strong></p> <p>Alternatively, <strong>is it a workable approach to update the stored CA certificate before changing the server's certificate, within 3 months of the expiration of both the CA and the server's certificates?</strong></p> <p>About limitations: could the server hold two different certificates within this update period?</p>
2023-10-14T11:15:27.163
<p>If a hacker gets hold of your hardware, it is just a question of the time and resources the hacker has before they can get your device key or certificate, and then all data stored on the device, unless it is one-way encrypted with no keys on the device to decrypt it.<br /> So, you will not be able to do much about that WiFi password, except make the hacker spend a bit of time and resources. In fact, such vectors are a threat to home security and hence many suggest that home owners take the trouble to put all such devices on the guest network. It is painful, but gets a degree of security.<br /> But your device is also uploading data to you after it gets on the WiFi. The important thing is that if they do get that one device, they cannot then get access to or spoof your other devices using that key/certificate.<br /> So, you will have to create device-specific keys. They can be derived from some master key or be random. In the case of certificates, they can (and should) have the same chain of authority but be individual device-specific certificates.<br /> In the case of AWS IoT, there are provisions in the certificate to put in a device serial number, and you can then set policies at AWS so that a certificate cannot spoof another device's serial number.<br /> The hacker will probably be able to spoof the same device and upload data to your cloud to try to cause a DoS (or DDoS, depending on your security policies) attack, or cause an upload of unreasonable amounts of valid-looking data to try to poison aggregate reports, etc.<br /> You will have to consider those threats as well.<br /> But device-specific keys are a start.</p>
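To illustrate the "derived from some master key" idea (the master key and serial number below are dummy values, and a real provisioning system might prefer a full KDF such as HKDF), a per-device key can be computed as an HMAC of the serial number under the master key, so leaking one device's key reveals nothing about the others:

```shell
# Master key held only by the factory/provisioning system (dummy value!)
MASTER_KEY_HEX=000102030405060708090a0b0c0d0e0f

# device_key = HMAC-SHA256(master_key, serial_number);
# each device only ever stores its own derived key
printf '%s' "SN-000123" \
  | openssl dgst -sha256 -mac HMAC -macopt hexkey:$MASTER_KEY_HEX
```

The derivation is deterministic, so the backend can re-derive any device's key on demand from the serial number without storing a per-device database.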
|arduino|cryptography|
If hackers get ahold of your physical IoT device, what's the purpose of using secure/crypto-chips?
6731
<p>Trying to design some secure firmware here and running into a brick wall regarding the use of crypto-chips. We are considering using this one here: <a href="https://www.microchip.com/en-us/product/ATECC608B" rel="nofollow noreferrer">https://www.microchip.com/en-us/product/ATECC608B</a>. All data on our RTL8720DN 2.4/5GHz chip is encrypted/decrypted using AES256, which is a big deal because we encrypt and save the customer's home WiFi password on the device EEPROM. The issue that we are concerned about is that the IV and CBC keys are exposed in our program itself. The crypto-chip fixes that by coming pre-configured with the keys and, obviously, they are securely locked so that no one can get them out. Whenever we need to encrypt/decrypt something we just make a quick call to the crypto-chip to get the secure keys and we are good to go.</p> <p>We are already using SSL/TLS communications through the customer's home WiFi to our cloud servers, etc., so we don't need the crypto-chip to help us with certificates.</p> <p>Billion dollar question here: if a hacker breaks into a customer's home and steals our device, can't they just decompile our code or even just flash their own code onto the device and &quot;programmatically&quot; extract the AES keys from the crypto-chip? That's basically what we do when we update our firmware via OTA. If that's the case, what's the difference between just leaving the keys in plain text in our program?</p>
2023-10-15T12:23:40.420
<p>Perhaps there is an error in the software, and the resolution is not <code>0.1</code>, but it is <code>1.0</code>.</p> <p>Setting the threshold to <code>1</code> may reveal such an error.</p>
|raspberry-pi|zwave|
aeotec aërQ temp/rh sensor or zwave-js not reporting state change
6733
<p>I have a simple IoT app:</p> <pre><code>raspberry pi 4
aeotec Z-stick7
aeotec aërQ temp/humidity sensor
mosquitto
zwave-js-ui
</code></pre> <p>The setup works fine, yet state changes are not being reported very often.</p> <p>Yesterday evening, circa 23:00, the sensor reported 22.7 degrees to <code>zwave-js-ui</code>. This morning, circa 08:00, 19.2 degrees. No reports in between.</p> <p>These measurements are accurate; however, the temperature will have fallen gradually during the night (the sensor is placed near a window that was opened partially at around 23:00 and the outdoor temperature sank to circa 8 degrees last night).</p> <p>The log in the <code>zwave-js-ui</code> webapp registers no events between 23:00 yesterday and 08:00 today.</p> <p>I'm guessing I need to tweak the parameter <code>[3-112-0-1] Temperature Change Report Threshold</code>, which appears under <code>Control Panel -&gt; Node -&gt; Configuration v4</code>?</p> <p><a href="https://i.stack.imgur.com/lWWqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lWWqX.png" alt="screenshot of zwave-js-ui" /></a></p> <p>But to what?</p> <p>I'd like reports at least every degree of change.</p> <p>It's worth pointing out that I get reports if I quick-push the button on the sensor.</p> <p>I found this post <a href="https://github.com/zwave-js/node-zwave-js/issues/2595" rel="nofollow noreferrer">https://github.com/zwave-js/node-zwave-js/issues/2595</a>, suggesting that this was an issue on older versions, called <code>zwavejs2mqtt</code>. The issue was unresolved, closed as stale.</p>
2023-10-29T14:18:17.437
<p>So I had to read back through <a href="https://zwave-js.github.io/zwave-js-ui/#/guide/mqtt?id=set-values" rel="nofollow noreferrer">the documentation</a>; however, it appears there is a <code>/set</code> suffix that needs to be added to the topic path, so instead of <code>zwave/Bedroom/Bedroom_Light/37/0/targetValue</code> it should be <code>zwave/Bedroom/Bedroom_Light/37/0/targetValue/set</code>. Also I could leave out the timestamp and just send this...</p> <pre><code>{&quot;value&quot;:true}
</code></pre> <p>Now it works fine.</p>
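For reference, publishing that from the command line would look something like this (assuming a broker on localhost and the same topic prefix as above; `mosquitto_pub` ships with the Mosquitto clients package):

```shell
# Write to the /set topic; the plain targetValue/currentValue topics are
# read-only state feedback published by Z-Wave JS UI
mosquitto_pub -h localhost \
  -t 'zwave/Bedroom/Bedroom_Light/37/0/targetValue/set' \
  -m '{"value":true}'
```

If the command succeeds, Z-Wave JS UI echoes the new state back on the `currentValue` topic, which is a convenient confirmation the device actually switched.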
|mqtt|zwave|
How do I control a device that works with Z-Wave JS UI via a published MQTT message?
6744
<p>I have a setup where I am running a simple Mosquitto server for MQTT. I then run the Z-Wave JS UI project to interact with my Z-Wave devices. This works great and I see the messages showing up in MQTT; for example, here is me turning the light on and then back off...</p> <pre><code>Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876065,&quot;value&quot;:1698588451955}
2023-10-29 10:14:36:043 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876065,&quot;value&quot;:1698588451955}
2023-10-29 10:14:36:043 Topic: zwave/Bedroom_Light/37/0/targetValue
QoS: 0
{&quot;time&quot;:1698588876077,&quot;value&quot;:true}
2023-10-29 10:14:36:053 Topic: zwave/Bedroom_Light/37/0/targetValue
QoS: 0
{&quot;time&quot;:1698588876077,&quot;value&quot;:true}
2023-10-29 10:14:36:054 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876079,&quot;value&quot;:true}
2023-10-29 10:14:36:054 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876079,&quot;value&quot;:true}
2023-10-29 10:14:36:055 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876094,&quot;value&quot;:true}
2023-10-29 10:14:36:068 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876094,&quot;value&quot;:true}
2023-10-29 10:14:36:068 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876316,&quot;value&quot;:1698588876093}
2023-10-29 10:14:36:292 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876316,&quot;value&quot;:1698588876093}
2023-10-29 10:14:36:293 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876494,&quot;value&quot;:1698588876093}
2023-10-29 10:14:36:467 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876494,&quot;value&quot;:1698588876093}
2023-10-29 10:14:36:468 Topic: zwave/Bedroom_Light/37/0/targetValue
QoS: 0
{&quot;time&quot;:1698588876510,&quot;value&quot;:false}
2023-10-29 10:14:36:485 Topic: zwave/Bedroom_Light/37/0/targetValue
QoS: 0
{&quot;time&quot;:1698588876510,&quot;value&quot;:false}
2023-10-29 10:14:36:485 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876511,&quot;value&quot;:false}
2023-10-29 10:14:36:485 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876511,&quot;value&quot;:false}
2023-10-29 10:14:36:486 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876526,&quot;value&quot;:false}
2023-10-29 10:14:36:502 Topic: zwave/Bedroom_Light/37/0/currentValue
QoS: 0
{&quot;time&quot;:1698588876526,&quot;value&quot;:false}
2023-10-29 10:14:36:503 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876743,&quot;value&quot;:1698588876526}
2023-10-29 10:14:36:717 Topic: zwave/Bedroom_Light/lastActive
QoS: 0
{&quot;time&quot;:1698588876743,&quot;value&quot;:1698588876526}
</code></pre> <p>Now with this working I would like to publish a message to turn on the light instead of using the web UI. To do this I tried to publish the message I saw...</p> <pre><code>zwave/Bedroom_Light/37/0/targetValue
{&quot;time&quot;:1698588451956,&quot;value&quot;:true}
</code></pre> <p>But nothing happens and I don't see the current value message to confirm it was changing. What am I missing? How do I get the Z-Wave device to toggle via MQTT?</p> <p>After playing a bit it seems to me like Z-Wave JS UI isn't properly handling the requests from MQTT. It seems to be publishing to it just fine. Still not sure what I am missing.</p>
2023-11-02T14:53:31.810
<p>I'm not sure where you got your classification, it seems a bit weird to me.</p> <p>There are dozens of 13.56 MHz RFID card types and sub-types, but some of the most common are:</p> <ul> <li><p>NTAG devices (could be actual cards, or take many other forms, especially stickers). Those are used mostly to &quot;publish&quot; an URL which can be read by a mobile phone for instance (like you would print a QR Code containing an URL), though like QR Codes, you could have all sorts of data in there. NTAG devices have capacities in the dozens or hundreds of bytes. They are originally rewritable, but you can write-protect them after programming them.</p> </li> <li><p>Mifare cards. There are quite a few models (Classic, Ultralight, DESFire...) and submodels (EV1/EV2/EV3, 1K/2K/4K/8K, etc.). Those have storage capabilities, and depending on the models, various modes of encryption and protection. Mostly used for access badges and for ticketing in transportation systems.</p> </li> <li><p>EMV payment cards (credit cards, debit cards...). Usually dual mode (contact + contactless), they can contain multiple applications, perform PIN verification (and self-disable once you have reached the maximum number of wrong PINs), hold a balance, perform crypto operations (digital signatures...).</p> </li> <li><p>e-Passports and some ID cards and driver's licenses.</p> </li> <li><p>Smartphones can also emulate quite a few of the above (as well as read quite a few of the above).</p> </li> </ul> <p>There are quite a few other RFID cards and devices, but the above are probably the most common, at least in Europe and North America (in Japan and a few other countries you'll also find Felica, Icode, and others). Not all are <em>actually</em> NFC (NFC is only a subset of all 13.56 MHz RFID cards).</p> <p>The UID is present on all those cards, it is used in the anti-collision protocol. 
The UID is supposed to be static, unique and non-rewritable, but of course you now have random UIDs, re-used UIDs, and cards with modifiable UIDs (used to clone existing cards).</p> <p>Using the UID to identify a card is very simple and used a lot for non-security-sensitive applications. As soon as you need security, you can just forget about that, you'll need something more secure.</p> <p>Most cards can store a lot more information, it can range from a few bytes to tens of thousands. There are often ways to protect the data (e.g. on most Mifare cards you can save data which is protected and can only be read if you have the relevant keys). Some cards like payment cards and the like can use asymmetric crypto and store a private key which you cannot read (but you can check the card knows the correct key by sending a challenge and verifying the signature it returns with the matching public key/certificate).</p> <p>To answer your questions specifically:</p> <blockquote> <p>Is it possible to read more data?</p> </blockquote> <p>Yes, there are commands to do so.</p> <blockquote> <p>if so - what is the nature of this data? (and notably - is it random data?)</p> </blockquote> <p>It would be the data you stored there. Before you write any data, depending on the card, it could be just 0s or some other fixed pattern, or just random data. Just think of it as a (very small) hard drive or USB stick. It holds blocks, you write data to it, and you can later read it back (taking into account various levels of protections if there are any).</p> <p>Some cards may also have other read-only data, though the details vary from card to card, you would have to check the data sheet for the specific model. Other cards may have dynamic data as well (e.g. number of activations).</p> <p>Some cards are able to generate dynamic data with a counter and a signature involving that counter, other data, and a private key (so one can even on some of them generate a dynamic URL which changes at each read). 
Again, specific to each type of card.</p> <blockquote> <p>Are there different cards with the possibility to write &quot;only UID&quot; or &quot;UID and more data&quot;?</p> </blockquote> <p>I'm not aware of any card on which you could only write the UID. Most cards do not allow changing the UID. And there is nearly always more data you can write to. However many cards can be write-protected, so you can write data, then set the card to read-only (either permanently or until you provide a password you set at the same time, depending on the card). Read-only mode can be set either card-wide or per block, depending on the device.</p> <blockquote> <p>Is it possible to programmatically query/guess the type of card?</p> </blockquote> <p>Very generally it can be quite difficult given the very large number of card types, but there are ways to do it for specific cards. See for instance the <a href="https://www.nxp.com/docs/en/application-note/AN10833.pdf" rel="nofollow noreferrer">MIFARE type identification procedure</a></p> <p>For some cards/tags you can use the <a href="https://play.google.com/store/apps/details?id=com.nxp.taginfolite&amp;hl=en" rel="nofollow noreferrer">NFC TagInfo by NXP</a> app on Android which will try to determine the tag type and provide more information, though results vary quite a bit depending on the type of card/tag. There is also an iOS version but it provides less information, and has probably more limited compatibility.</p>
|arduino|nfc|
What kind of 13.56 MHz NFC cards are there?
6748
<p><em><strong>EDIT</strong>: when mentioning &quot;cards&quot; in the question below, I mean the kind of badges/cards you can buy at AliExpress that claim to be &quot;magic&quot; cards (or that do not claim anything and are then &quot;read-only&quot; ones)</em></p> <p>I am planning to use 13.56 MHz NFC cards to protect a lock. I will write the software part myself (this is not a problem, including the security aspects) but I have a hard time understanding NFC cards (in the context of my project). I will build the reader myself, based on a NodeMCU and an RC522 reader.</p> <p>I have done some reading, and from the chaotic information I found (I must not have looked in the right place), my understanding for the 13.56 MHz cards I have is the following:</p> <ul> <li><p>&quot;static&quot; cards, with their information written once and not changeable. I <a href="https://www.aranacorp.com/en/using-an-rfid-module-with-an-esp8266/" rel="nofollow noreferrer">connected</a> the Arduino to the RC522 reader and can read a 4 bytes UID.<br /> <strong>Question 1</strong>: is it possible to read more data? (MIFARE cards apparently have 1kB of data)<br /> <strong>Question 2</strong>: if so - what is the nature of this data? (and notably - is it random data?)</p> </li> <li><p>&quot;rewritable&quot; cards (also called &quot;magic&quot;) where one can write to the card. The UID can be written to, but this is also possible with more data (Home Assistant writes a UUID for instance).<br /> <strong>Question 3:</strong> Are there different cards with the possibility to write &quot;only UID&quot; or &quot;UID and more data&quot;?</p> </li> </ul> <p>And finally <strong>Question 4</strong>: is it possible to programmatically query/guess the type of card? 
(with a NodeMCU + RC522, but I am open to other combinations)</p> <p>That's a lot of questions, let me know if you want me to split them into 4 different posts.</p> <p>I have gathered over the years many NFC tags (some rewritable, some not) and would like to put them to use, but I first need to understand their nature.</p>
2023-11-15T04:09:02.943
<p>Finally had an answer from the installer. It may help others.</p> <blockquote> <p><strong>Lounge/Living HEOS:</strong> These 2x zones are on the Denon AVR. There is only one HEOS streamer input, so you can only switch zones on/off and not ungroup. All other HEOS zones are fully matrixed with a HEOS streamer per zone.</p> </blockquote>
|sound|
How can I ungroup these permanently grouped speakers on my Heos System
6752
<p>I am using a Heos / Denon system. I have speakers in each room, with home theatre support in two rooms (Living and Lounge). For some reason these two rooms are permanently grouped together.</p> <p>I can group others and ungroup them. I can even drag another room into this fixed group and remove that room. I just can't separate the Lounge and Living. This causes confusion sometimes when the wrong speaker starts emitting sound after its companion is switched off.</p> <p>Normal drag out or pinch out has no effect on this set. How can I ungroup them?</p> <p><a href="https://i.stack.imgur.com/RCydU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RCydU.jpg" alt="image" /></a></p>
2023-11-26T03:51:59.683
<p>My assumptions:</p> <p>I think the pump is always supplied with power today, and its work is to pressurize the supply at the output so that the faucets and the timed sprinklers can do their work at the times they need to.</p> <p>If that is true, you are looking for a solution that detects when there is a break in the fittings which would result in a loss of that pressure.</p> <p>If that chain of assumptions is true, then maybe something like this would be a solution for you. (I am not affiliated in any way with this product. Just using it as an example. There are many others that are in the same solution space.)</p> <p><a href="https://rads.stackoverflow.com/amzn/click/com/B081HT5LD6" rel="nofollow noreferrer">https://www.amazon.com/Moen-900-006-1-Inch-Smart-Shutoff/dp/B081HT5LD6/</a></p>
|smart-plugs|energy-monitoring-systems|
Outdoor smart plug with real-time energy monitoring and notification
6763
<p>I live in Australia. I have a garden bore with an electric bore pump for garden reticulation (irrigation). The bore supplies water to taps (faucets) in the garden, as well as sprinklers that are on separate timers. Consequently, power is continually supplied to the pump. On rare occasions one of the fittings snaps, as happened this morning, and water just continuously pumps out until we become aware of the breakage. This is bad enough if we are at home, but should it happen when no one is at home the pump might run for hours or even days. I could just put the bore pump on a smart plug and restrict the time that the power is supplied to the pump to the times required by the sprinklers, but that would mean manually turning on the power if we wanted to use a tap. What I would like is an outdoor smart plug that could not only remotely control power to the pump, but could also notify me when power is being consumed outside specified times. That way I could leave the power on all the time, as I currently do, but if I get a notification that power is being drawn at an unexpected time I could remotely turn the power off.</p> <p>Has anyone come across a smart plug with this capability?</p>
2023-12-10T20:20:19.740
<p>As mentioned, you cannot use power cables for Ethernet directly; otherwise, why would there be special cables for it?</p> <p>You can use powerline adapters. The quality and speed of signal you get will depend on the manufacturer and the price you pay. There are caveats in some ads for them, so read them carefully.</p> <p>You need one at either end of the same cable (i.e. the same phase, without too many joins). If you are only going to use the Ethernet for smart switches, then it will work just fine. For anything else, it is trial and error.</p> <p>You can also use better quality WAPs or even long distance ones. I am using an outdoor UniFi unit that easily reaches 100m.</p>
|ethernet|wired|
Can I nut-in an Ethernet connector (from 28 gauge) to 12 gauge so that I can run Ethernet out to my detached garage?
6779
<p>I have a detached garage that has its own electrical panel. The previous owner of the house was an electrician and it's done really well; however, there is not a very strong Wi-Fi signal in the garage.</p> <p>I would like to have some smart outlets out there and such. There are 2 lines that run through the pipe that I can use (one was a 4-wire line for a house-switch to the eave lights on the garage, which also had a switch, and the other is a 3-wire spare). Currently, all but one of the seven lines is being used by the Sonoff smart switch relay.</p> <p>I was in the process today of swapping out the Sonoff mini with a UL certified Shelly (because the Sonoff failed electrical inspection, since it is not UL certified), and I thought, &quot;boy, I wish there was an ethernet line already running through this electrical pipe out to the garage, then I could have a wired backhaul out there&quot;.</p> <p>I started thinking about it, and I thought, &quot;huh, what if I actually just spliced in an ethernet connector and hooked up ethernet through these 12 gauge wires?&quot;</p> <p>So I started googling and I learned about powerline ethernet adapters. I'm not sure if that would be the same thing as me just splicing in ethernet connectors or not. There's no outlet involved.</p> <p>I have read that you don't get very good speeds with powerline adapters, which is not an issue. I just want a couple of smart outlets and switches, etc., and these two lines are just two of many that run out to the garage. Other than those, there are 200 W of other lines going out there that connect to the panel in the garage.</p> <p>The 12 gauge wires I have are threaded.</p> <p>So could this work? Can I just splice in a couple of ethernet connectors?</p> <p>What are the upsides and downsides of doing this? I understand that there should be issues with regard to shielding and data loss. I don't need it to be very fast, I just need it to be reliable.</p>
2024-01-21T10:55:18.167
<p>I just tried and my 4 devices don't do this. On the app, you can:</p> <ul> <li>Press the More icon</li> <li>Select Activity History</li> <li>Select Voice</li> </ul> <p>Here you can see your commands and what Alexa interpreted them as. To tell it that it erred:</p> <ul> <li>Press the down arrow/chevron</li> <li>Press Thumbs Down to tell it that it got it wrong.</li> </ul>
|smart-home|alexa|
Why is Alexa telling me to dial emergency services?
7819
<p>When I tell Alexa to be quiet, it often, but not always, responds with this.</p> <p><a href="https://youtu.be/DZssCjoTg-0" rel="nofollow noreferrer">https://youtu.be/DZssCjoTg-0</a></p> <p>We have our wake word set to &quot;computer&quot; because Star Trek. As you can see from the video, it does it most of the time, but not consistently every time. I was wondering whether it is just me, but it also does the same for my wife. Other Echo Dot units in different locations also do the same.</p> <p><strong>Questions</strong></p> <ol> <li>Why does it give this answer?</li> <li>How can I report this to Amazon?</li> </ol>
2024-02-03T04:38:20.983
<p>On being prompted in the comments, I have two ways of doing it. It is worth noting that <code>Alarms</code>, <code>Timers</code> and <code>Notifications</code> share the volume level.</p> <p><strong>Voice</strong></p> <ul> <li>&quot;Alexa, set volume for notifications to 6&quot;</li> <li>or &quot;Alexa, increase the volume for notifications&quot;</li> </ul> <p><strong>On the App in 2024</strong></p> <ul> <li>Open the Alexa app</li> <li>Select <code>Devices</code> at the bottom</li> <li>Press the <code>Device Type</code> drop-down</li> <li>Select <code>Echo &amp; Alexa</code> and press <code>Apply</code></li> <li>Scroll and select each Alexa device in turn</li> <li>Select <code>⚙️</code> settings</li> <li>Scroll down to select <code>Sounds</code> in the General section</li> <li>At the top, change the slider for <code>Alarms, Timers and Notifications</code> to the desired level.</li> </ul>
|amazon-echo|sound|
How can I tell Alexa to use a different sound level for Notifications?
7826
<p>I have a Fibaro Home Automation system with multiple Alexa devices. When someone leaves any of the 3 garage doors open, I have Alexa automatically notify us hourly between 8pm and 11pm that the garage door is still open.</p> <p>This has been working fine for months. All of a sudden these notifications are whisper quiet on all the devices. They are not automatically adjusted for the background noise level either; if the TV is on, I almost never hear them.</p> <p><strong>Is there a way to get Alexa to use the same volume level for everything it does?</strong></p> <p>This sounds like <a href="https://iot.stackexchange.com/questions/1750/can-alexa-be-trained-to-use-preferred-volume-levels">this question</a> but it's different in that I don't want to customise it, I just want everything at the same level. And it's seven years later.</p>
2024-02-12T16:21:10.057
<p>You will have to enable networking between the two.</p> <blockquote> <p>Try this:</p> <ol> <li>Set up the VirtualBox VM with two adapters: the first set to NAT (that will give you the internet connection), the second set to host-only.</li> <li>Start the virtual machine and assign a static IP for the second adapter in Ubuntu (for instance 192.168.56.56). The host Windows will have 192.168.56.1 as the IP for the internal network (&quot;VirtualBox Host-Only Network&quot; is the name in network connections in Windows). This lets you access the Apache server on Ubuntu from Windows by going to 192.168.56.56, while Ubuntu keeps internet access through the first (NAT) adapter.</li> <li>To make the connection available both ways (accessing the Windows host from the Ubuntu guest), there is one more step. Windows automatically adds the VirtualBox host-only network to the list of public networks, and that cannot be changed, so the firewall will prevent proper access. To overcome this without making any security breaches in your setup: go to the Windows Firewall section in Control Panel and click Advanced Settings. In the page that pops up, click Inbound Rules (left column), then New Rule (right column). Choose a custom rule, set the rule to allow all programs and any protocol. For the scope, add 192.168.56.1 in the first box (local IP addresses) and 192.168.56.56 in the second box (remote IP). Click Next, select Allow the Connection, check all profiles, give it a name and save.</li> <li>That's it: you now have two-way communication, with Apache (or any other service) available as well as internet. The final step is to set up a share. Do not use the shared folders feature in VirtualBox; it's quite buggy, especially with Windows 7 (and 64-bit). Instead use Samba shares: fast and efficient.</li> </ol> <p>Follow this link for how to set that up: <a href="https://wiki.ubuntu.com/MountWindowsSharesPermanently" rel="nofollow noreferrer">https://wiki.ubuntu.com/MountWindowsSharesPermanently</a></p> </blockquote> <p>Refer to <a href="https://serverfault.com/questions/225155/virtualbox-how-to-set-up-networking-so-both-host-and-guest-can-access-internet">virtualbox networking</a>.</p> <p>PS: I recently joined and do not have enough reputation to put this in a comment, hence the answer. Hope this solves your problem.</p>
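Once both adapters are set up, a quick way to confirm two-way reachability beyond `ping` is a minimal TCP echo check. This is a sketch, not part of the quoted answer: the `echo_server`/`check_link` names are mine, and it defaults to `127.0.0.1` so you can dry-run it on one machine before substituting the 192.168.56.x addresses from the setup above.

```python
import socket
import threading

def echo_server(host: str, port: int, ready: threading.Event) -> None:
    """Accept one connection and echo a single message back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        ready.set()  # signal that we are listening
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def check_link(host: str, port: int) -> bool:
    """Run from the other side (e.g. the Windows host) to test reachability."""
    try:
        with socket.create_connection((host, port), timeout=3) as c:
            c.sendall(b"ping")
            return c.recv(1024) == b"ping"
    except OSError:
        return False

if __name__ == "__main__":
    # Dry run on one machine; replace 127.0.0.1 with 192.168.56.56 (guest)
    # or 192.168.56.1 (host) to test across the host-only link.
    ready = threading.Event()
    t = threading.Thread(target=echo_server, args=("127.0.0.1", 15656, ready))
    t.start()
    ready.wait()
    print("link ok:", check_link("127.0.0.1", 15656))
    t.join()
```

Run `echo_server` on the guest and `check_link` on the host (and vice versa); if `check_link` returns `False` while `ping` works, the blocker is usually the firewall rule described above rather than the adapter configuration.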
|raspberry-pi|
Trouble pinging between host (Windows) and guest OS (Raspberry Pi) in VirtualBox
7833
<p>I am working on an IoT project where I have set up a Raspberry Pi as a guest OS in VirtualBox on my Windows host machine. Both the host and the guest OS have been assigned static IP addresses: <strong>192.168.56.1</strong> for the host and <strong>192.168.56.2</strong> for the guest.</p> <p>I am facing issues with network connectivity between the host and the guest OS. When I attempt to ping the guest OS (192.168.56.2) from the host, I receive the following message:</p> <blockquote> <p>Pinging 192.168.56.2 with 32 bytes of data:<br /> Reply from 192.168.56.1: Destination host unreachable.<br /> Request timed out.<br /> Request timed out.<br /> Request timed out.</p> <p>Ping statistics for 192.168.56.2: Packets: Sent = 4, Received = 1, Lost = 3 (75% loss).</p> </blockquote> <p>And when I try to ping the host (192.168.56.1) from the guest OS, I get the error message:</p> <blockquote> <p>ping: connect: Network is unreachable</p> </blockquote> <p>I have configured the network settings in VirtualBox to use a bridged adapter.</p> <p>Could someone help me troubleshoot this issue? I'm unsure why the ping requests are failing despite configuring the network settings correctly. Any insights or suggestions would be greatly appreciated.</p>
2024-02-19T23:42:31.493
<p>Yes, it's possible, within the limits that exist in the vehicle network ecosystem. As another answer suggested, you can plug in an OBD2 reader and connect to it to get the readings.</p> <p>This has its limitations, though. Vehicles come with fuel level sensors, but the values are not always broadcast or made available to the BCM (body control module) with which these OBD readers communicate. Reading such data can get a little complicated. You will ideally have to tap into the CAN network directly from the OBD2 port (pins 6 &amp; 14) and reverse engineer the function address (jargon for vehicle CAN communication) that the fuel level is mapped to inside this particular vehicle's network. And to your surprise, these IDs are not standard across different manufacturers or models, so it becomes a long journey of reverse engineering. But assuming you are able to do that, it is then just a matter of writing a CAN payload targeting the function address to get the value in response.</p> <p><a href="https://i.stack.imgur.com/J2nrw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J2nrw.png" alt="enter image description here" /></a></p>
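Before resorting to reverse engineering, it is worth checking whether the ECU happens to expose the standardized OBD-II "fuel tank level input" PID (service 0x01, PID 0x2F, scaled as 100*A/255 percent). Many vehicles do, but it is not guaranteed. The sketch below shows only the frame layout and decoding (the helper names are mine); actually sending it requires CAN hardware and a library such as python-can.

```python
# Sketch: standard OBD-II "fuel tank level input" (service 0x01, PID 0x2F).
# Assumes the ECU implements this PID; otherwise you are back to
# reverse engineering proprietary CAN IDs as described above.

OBD_BROADCAST_ID = 0x7DF   # functional (broadcast) request CAN ID
OBD_RESPONSE_ID = 0x7E8    # typical engine-ECU response CAN ID

def build_fuel_level_request() -> bytes:
    """8-byte single frame: [data length, service, PID, padding...]."""
    return bytes([0x02, 0x01, 0x2F, 0x00, 0x00, 0x00, 0x00, 0x00])

def decode_fuel_level(frame: bytes) -> float:
    """Decode a positive response [len, 0x41, 0x2F, A, ...] into percent."""
    if len(frame) < 4 or frame[1] != 0x41 or frame[2] != 0x2F:
        raise ValueError("not a fuel-level response")
    return 100.0 * frame[3] / 255.0  # standard scaling: 100 * A / 255

if __name__ == "__main__":
    req = build_fuel_level_request()
    # A hypothetical response where byte A = 0x80 (about half a tank):
    resp = bytes([0x03, 0x41, 0x2F, 0x80, 0x00, 0x00, 0x00, 0x00])
    print(f"request to 0x{OBD_BROADCAST_ID:03X}: {req.hex()}")
    print(f"fuel level: {decode_fuel_level(resp):.1f}%")  # -> 50.2%
```

For remote monitoring, the usual pattern is a GSM/Wi-Fi-equipped OBD2 dongle running this kind of poll loop and pushing the decoded percentage to a cloud endpoint that the smartphone app reads.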
|sensors|wifi|mobile-applications|
Car fuel level sensor remotely monitoring
7837
<p><a href="https://www.google.com/search?q=car%20fuel%20level%20sensor&amp;oq=Automobilefuel%20leve&amp;gs_lcrp=EgZjaHJvbWUqCAgCEAAYFhgeMgYIABBFGDkyCggBEAAYDxgWGB4yCAgCEAAYFhgeMggIAxAAGBYYHjIICAQQABgWGB4yCAgFEAAYFhgeMggIBhAAGBYYHjIKCAcQABgPGBYYHjIICAgQABgWGB4yDQgJEAAYhgMYgAQYigUyDQgKEAAYhgMYgAQYigUyDQgLEAAYhgMYgAQYigUyDQgMEAAYhgMYgAQYigUyDQgNEAAYhgMYgAQYigXSAQkxMTkwNGowajGoAgCwAgA&amp;client=ms-android-samsung-ss&amp;sourceid=chrome-mobile&amp;ie=UTF-8" rel="nofollow noreferrer">Car fuel level sensor</a></p> <p>Car fuel level monitoring is implemented and viewed on the dashboard with the assistance of a fuel level sensor inside the automobile.</p> <p>Is it possible to remotely monitor a running automobile's fuel level with the assistance of smartphone apps, Wi-Fi and the Internet of Things (IoT)?</p> <p>If yes, how?</p> <p>If no, why?</p>
2024-03-09T13:27:34.033
<p>Well, boy was I surprised when this morning the light turned on as planned, even though the automation didn't work when testing it.</p> <p>My assumption would be that the sensor's data values are not propagated in time in HA. After all, the &quot;Next Alarm&quot; sensor had the updated value in time when testing, and the sensor update schedule says &quot;immediate&quot; in the app. That leaves as the only explanation that the sensor's data is not immediately available to automations.</p> <p>Can anyone confirm/correct this hypothesis?</p>
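One way to probe this hypothesis from outside the automation engine is to compare the sensor's <code>last_updated</code> timestamp (as returned by HA's REST API at <code>/api/states/sensor.my_phone_next_alarm</code>) against the alarm time itself: if the sensor only refreshed after the time trigger had already evaluated, the automation could not fire. A minimal sketch of just the comparison logic; the one-minute margin is my assumption, not anything HA documents:

```python
from datetime import datetime, timedelta

def sensor_updated_in_time(alarm_iso: str, last_updated_iso: str,
                           margin: timedelta = timedelta(minutes=1)) -> bool:
    """True if the next-alarm sensor was refreshed at least `margin`
    before the alarm fires, i.e. early enough for a time trigger to see it."""
    alarm = datetime.fromisoformat(alarm_iso)
    last_updated = datetime.fromisoformat(last_updated_iso)
    return last_updated <= alarm - margin

if __name__ == "__main__":
    alarm = "2024-03-11T07:00:00+00:00"  # value as shown in the logbook
    print(sensor_updated_in_time(alarm, "2024-03-11T06:30:00+00:00"))  # True
    print(sensor_updated_in_time(alarm, "2024-03-11T06:59:30+00:00"))  # False
```

Logging this comparison for a few mornings would distinguish "the value arrived too late" from "the value was there but the trigger ignored it".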
|sensors|home-assistant|
HomeAssistant: Companion App sensors won't trigger automation
7864
<p>I use the latest version of the app (2024.1.5-full) and HA (2024.3.0) and want to use the <code>next alarm</code> sensor as a trigger for an automation in HA. So I</p> <ul> <li>Enabled that sensor on the phone (it grabs the correct time of the next alarm),</li> <li>Added Android System and the alarm app I use to the <code>allow list</code> (not sure if actually necessary, but oh well...),</li> <li>Set the sensor update frequency to <code>fast while charging</code> and connect my phone to a charger to make sure that sensors are updated every minute,</li> <li>Gave the HA app <code>background access</code> to continuously run in the background,</li> <li>Activated the <code>next alarm</code> sensor in the <code>mobile app</code> integration, and</li> <li>Verified that the phone's sensor changes show up in HA in the logbook (they do and they show the correct values, e.g. <code>2024-03-11T07:00:00+00:00</code>)</li> </ul> <p>Now, I have created an automation for &quot;time&quot; as in <code>When the time is equal to the entity my_phone Next alarm</code> and added my automation actions:</p> <pre><code>alias: Morning light description: &quot;&quot; trigger: - platform: time at: sensor.my_phone_next_alarm condition: [] action: - service: light.turn_on metadata: {} data: brightness: 120 transition: 15 target: device_id: 019abc309c83cf3152e60b37ab1ea554 mode: single </code></pre> <p>When I select &quot;run&quot; to manually test this automation, everything works as intended. 
Only problem is: the automation is not triggered when the alarm time is reached.</p> <p>Moreover: When I add the condition that the automation should only be executed when my phone is connected to my local WiFi (sensor &quot;WiFi Connection&quot;), it ignores the condition apparently:</p> <pre><code>alias: Morning light description: &quot;&quot; trigger: - platform: time at: sensor.my_phone_next_alarm condition: - condition: state entity_id: sensor.my_phone_wifi_connection state: MyAwesomeWiFiSSID action: - service: light.turn_on metadata: {} data: brightness: 120 transition: 15 target: device_id: 019abc309c83cf3152e60b37ab1ea554 mode: single </code></pre> <p>With this second example, the light turns on when I run the automation manually - EVEN IF I previously disconnect my phone from my WiFi network (i.e. they shouldn't turn on because of said condition). Without running all that manually, the automation is obviously not triggered at all as the first example wasn't, too.</p> <p>Any advice on this? How do I get this automation to work properly?</p>
2024-03-18T14:58:55.210
<p>Yes, you can. My suggestion is to use Docker. Just install it on your machine (<a href="https://docs.docker.com/desktop/install/mac-install/" rel="nofollow noreferrer">https://docs.docker.com/desktop/install/mac-install/</a>) and then use Docker to deploy Home Assistant: <a href="https://www.home-assistant.io/installation/" rel="nofollow noreferrer">https://www.home-assistant.io/installation/</a></p>
|home-assistant|
Can I run Home Assistant natively on an old macbook?
7872
<p>I have an old macbook, that I don't use for anything. It sits on a shelf. It occurred to me that I could maybe run Home Assistant on it? But when I look at the install guide, and see they point to installing it in a vm, and I google it to see if there are native installs, but they all say &quot;run it in a VM&quot;</p> <p>I don't need a VM, and am not sure if the old macbook could really run virtualization, as it is kind of slow - but this old macbook is better than a raspberry pi, I think.</p> <p>Shouldn't it be able to run it natively?</p> <p>EDIT: The macbook is a <a href="https://support.apple.com/en-ca/112442" rel="nofollow noreferrer">Macbook Retina 12inch, Early 2015</a>, and it seems quite slow, and doesn't get updates (running Mojave, 10.14)</p>