haixuantao committed on
Commit
12d535c
0 Parent(s):

Initial commit

Drivers

Adding operators

Remove pptx

.gitattributes ADDED
@@ -0,0 +1,3 @@
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.mkv filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,5 @@
+ graphs/yolov5n.pt
+ *.pt
+ operators/__pycache__/
+ __pycache__/
+ *.avi
README.md ADDED
@@ -0,0 +1,52 @@
+ # Dora-Robomaster
+
+ This project aims to use Dora to enhance the capabilities of a RoboMaster S1.
+ You can see the presentation and demos in the folder Presentation_dora/
+
+ ### Getting Started
+
+ Command to start the demo:
+
+ ```bash
+ alias dora='dora-cli'
+ dora up
+ dora start graphs/dataflow.yml --attach
+ ```
+
+ Start the reaction lighting test:
+ `dora start graphs/reaction.yml --attach`
+
+ ## Installation of the Robomaster S1 Hack
+
+ This guide is an updated version of the original [Robomaster S1 SDK Hack Guide](https://www.bug-br.org.br/s1_sdk_hack.zip) and is intended for use on a Windows 11 system.
+
+ ### Prerequisites
+
+ Before you get started, you'll need the following:
+
+ - Robomaster S1 (do not update it to the latest version, as it may block the hack).
+ - [Robomaster App](https://www.dji.com/fr/robomaster-s1/downloads).
+ - [Android SDK Platform-Tools](https://developer.android.com/tools/releases/platform-tools). Simply unzip it and keep the path handy.
+ - A micro USB cable. If this guide doesn't work, there might be an issue with the cable, and you may need to replace it with one that supports data transfer.
+
+ ### Instructions
+
+ 1. Start the Robomaster App and connect the Robomaster S1 using one of the two options provided (via router or via Wi-Fi).
+ 2. While connected, use a micro USB cable to connect the robot to the computer's USB port. You should hear a beep, similar to when you connect any other device. (Note that no other Android device should be connected via USB during this process.)
+ 3. In the Lab section of the app, create a new Python application and paste the following code:
+
+ ```python
+ def root_me(module):
+     __import__ = rm_define.__dict__['__builtins__']['__import__']
+     return __import__(module, globals(), locals(), [], 0)
+
+ builtins = root_me('builtins')
+ subprocess = root_me('subprocess')
+ proc = subprocess.Popen('/system/bin/adb_en.sh', shell=True, executable='/system/bin/sh', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ ```
+
+ 4. Run the code; there should be no errors, and the console should display **Execution Complete**.
+ 5. Without closing the app, navigate to the folder containing the Android SDK Platform-Tools and open a terminal inside it.
+ 6. Run the ADB command `.\adb.exe devices`. If everything is working correctly, you should see output similar to this: ![image](https://github.com/Felixhuangsiling/Dora-Robomaster/assets/77993249/dc6368ec-052c-4b18-8fdc-0ec314adb073)
+ 7. Execute the upload.sh script located in the folder `s1_SDK`.
+ 8. Once everything has been executed, restart the S1 by turning it off and then back on. While it's booting up, you should hear two chimes instead of the usual single chime, indicating that the hack was successful.
graphs/dataflow.yml ADDED
@@ -0,0 +1,156 @@
+ nodes:
+   # - id: robot
+   #   operator:
+   #     python: ../operators/robot.py
+   #     inputs:
+   #       blaster:
+   #         source: planning/blaster
+   #         queue_size: 1
+   #       led:
+   #         source: planning/led
+   #         queue_size: 1
+   #       control:
+   #         source: planning/control
+   #         queue_size: 1
+   #       gimbal_control:
+   #         source: planning/gimbal_control
+   #         queue_size: 1
+   #       tick:
+   #         source: dora/timer/millis/100
+   #         queue_size: 1
+   #     outputs:
+   #       - position
+   # - id: bot_plot
+   #   operator:
+   #     python: ../operators/plot.py
+   #     inputs:
+   #       image: bot_webcam/image
+   #       bbox: object_detection/bbox
+   # - id: bot_webcam
+   #   custom:
+   #     source: ../operators/opencv_stream.py
+   #     outputs:
+   #       - image
+
+   - id: object_detection
+     operator:
+       python: ../operators/object_detection.py
+       inputs:
+         image:
+           source: webcam/image
+           queue_size: 1
+       outputs:
+         - bbox
+
+   ### Second Camera
+   - id: webcam
+     operator:
+       python: ../operators/webcam.py
+       inputs:
+         tick:
+           source: dora/timer/millis/50
+           queue_size: 1
+       outputs:
+         - image
+
+   - id: plot_webcam
+     operator:
+       python: ../operators/plot.py
+       inputs:
+         image: webcam/image
+         text: whisper/text
+         bbox: object_detection/bbox
+
+   # - id: plot_bot
+   #   operator:
+   #     python: ../operators/plot.py
+   #     inputs:
+   #       image: bot_webcam/image
+   #       text: whisper/text
+   #       bbox: object_detection/bbox
+
+   # - id: planning
+   #   operator:
+   #     python: ../operators/planning_op.py
+   #     inputs:
+   #       position: robot/position
+   #       bbox: object_detection/bbox
+   #     outputs:
+   #       - control
+   #       - gimbal_control
+   #       - led
+   #       - blaster
+
+   ## Speech to text
+   - id: keyboard
+     custom:
+       source: ../operators/keybinding_op.py
+       outputs:
+         - mic_on
+         - cancel
+         - failed
+
+   - id: microphone
+     operator:
+       python: ../operators/microphone_op.py
+       inputs:
+         mic_on: keyboard/mic_on
+       outputs:
+         - audio
+
+   - id: whisper
+     operator:
+       python: ../operators/whisper_op.py
+       inputs:
+         audio: microphone/audio
+       outputs:
+         - text
+
+   ## Code Modifier
+   - id: vectordb
+     operator:
+       python: ../operators/sentence_transformers_op.py
+       inputs:
+         query: whisper/text
+         saved_file: file_saver/saved_file
+       outputs:
+         - raw_file
+
+   - id: mistral
+     operator:
+       python: ../operators/mistral_op.py
+       # python: ../operators/chatgpt_op.py
+       inputs:
+         raw_file: vectordb/raw_file
+       outputs:
+         - output_file
+
+   - id: chatgpt
+     operator:
+       python: ../operators/chatgpt_op.py
+       inputs:
+         raw_file: vectordb/raw_file
+       outputs:
+         - output_file
+
+   - id: file_saver
+     operator:
+       python: ../operators/file_saver_op.py
+       inputs:
+         chatgpt_output_file: chatgpt/output_file
+         mistral_output_file: mistral/output_file
+         cancel: keyboard/cancel
+         failed: keyboard/failed
+       outputs:
+         - saved_file
+
+   - id: dora-record
+     custom:
+       source: dora-record
+       inputs:
+         chatgpt_output_file: chatgpt/output_file
+         mistral_output_file: mistral/output_file
+         raw_file: vectordb/raw_file
+         saved_file: file_saver/saved_file
graphs/merge.py ADDED
@@ -0,0 +1,13 @@
+ import pyarrow as pa
+
+ with pa.memory_map("mistral_output_file.arrow", "r") as source:
+     df_i = pa.RecordBatchStreamReader(source).read_all()
+
+ # df_b was never read in the original; assuming the second recording is raw_file.arrow
+ with pa.memory_map("raw_file.arrow", "r") as source:
+     df_b = pa.RecordBatchStreamReader(source).read_all()
+
+ df_i = df_i.to_pandas()
+ df_b = df_b.to_pandas()
+
+ df = df_i.merge(df_b, on="trace_id")
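For reference, the merge above pairs rows from the two recordings that share a `trace_id`; a toy illustration with made-up values:

```python
import pandas as pd

# Two tiny stand-ins for the decoded Arrow recordings (hypothetical trace_ids)
df_i = pd.DataFrame({"trace_id": ["a", "b"], "mistral_output_file": ["x1", "x2"]})
df_b = pd.DataFrame({"trace_id": ["b", "c"], "raw_file": ["y1", "y2"]})

# Inner join (pandas' default): only trace_ids present in both frames survive
df = df_i.merge(df_b, on="trace_id")
print(df)  # one row: trace_id "b" with columns from both frames
```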
graphs/mistral_output_file.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:764707ce8c55e898c8bd577a725af16a2fe931f83cb597cd0cb9f768214e9667
+ size 29904
graphs/raw_file.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13ea7b9c4a305d07c94616c87baa41c1c195cc67930cd57fb2e3010eea50c6d6
+ size 29912
graphs/reaction.yml ADDED
@@ -0,0 +1,40 @@
+ nodes:
+   - id: robot
+     operator:
+       python: ../operators/robot.py
+       inputs:
+         blaster:
+           source: planning/blaster
+           queue_size: 1
+         led:
+           source: planning/led
+           queue_size: 1
+         tick:
+           source: dora/timer/millis/50
+           queue_size: 1
+       outputs:
+         - image
+         - position
+   - id: plot
+     operator:
+       python: ../operators/plot.py
+       inputs:
+         image: robot/image
+         bbox: object_detection/bbox
+   - id: object_detection
+     operator:
+       python: ../operators/object_detection.py
+       inputs:
+         image:
+           source: robot/image
+           queue_size: 1
+       outputs:
+         - bbox
+   - id: planning
+     operator:
+       python: ../operators/reaction_op.py
+       inputs:
+         bbox: object_detection/bbox
+       outputs:
+         - led
+         - blaster
graphs/saved_file.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0ae045e300e026d2ceccc701b0301562d8a58b4a0de2994b5fcd2e1ed87cf5b
+ size 14976
operators/chatgpt_op.py ADDED
@@ -0,0 +1,123 @@
+ import os
+
+ import pyarrow as pa
+
+ from dora import DoraStatus
+ from openai import OpenAI
+
+
+ def ask_gpt(prompt, raw):
+     client = OpenAI()
+
+     full_prompt = (
+         "this is a python code :\n"
+         + "```python\n"
+         + raw
+         + "```\n"
+         + prompt
+         + "Format your response by: Showing the whole modified code. No explanation is required. Only code."
+     )
+
+     response = client.chat.completions.create(
+         model="gpt-4-1106-preview",
+         messages=[
+             {"role": "system", "content": "You are a helpful assistant."},
+             {"role": "user", "content": full_prompt},
+         ],
+     )
+
+     answer = response.choices[0].message.content
+     return answer
+
+
+ def extract_command(gpt_command):
+     """Extract the contents of the ```python fenced blocks from an answer."""
+     blocks = []
+     temp = ""
+     writing = False
+
+     for line in gpt_command.splitlines():
+         if line == "```":
+             writing = False
+             blocks.append(temp)
+             temp = ""
+
+         if writing:
+             temp += line
+             temp += "\n"
+
+         if line == "```python":
+             writing = True
+
+     return blocks
+
+
+ def save_as(content, path):
+     # use at the end of replace_2 as save_as(end_result, "file_path")
+     with open(path, "w") as file:
+         file.write(content)
+
+
+ class Operator:
+     """
+     Modifies a Python file according to a query via the ChatGPT API
+     """
+
+     def on_event(
+         self,
+         dora_event,
+         send_output,
+     ) -> DoraStatus:
+         if dora_event["type"] == "INPUT":
+             input = dora_event["value"][0].as_py()
+             with open(input["path"], "r", encoding="utf8") as f:
+                 raw = f.read()
+             print("--- Asking chatGPT ", flush=True)
+             response = ask_gpt(input["query"], raw)
+             blocks = extract_command(response)
+             print(response, flush=True)
+             print(blocks[0], input["path"], flush=True)
+             send_output(
+                 "output_file",
+                 pa.array(
+                     [{"raw": blocks[0], "path": input["path"], "gen_output": response}]
+                 ),
+                 dora_event["metadata"],
+             )
+
+         return DoraStatus.CONTINUE
+
+
+ if __name__ == "__main__":
+     op = Operator()
+
+     # Path to the current file
+     current_file_path = __file__
+
+     # Directory of the current file
+     current_directory = os.path.dirname(current_file_path)
+
+     path = current_directory + "/planning_op.py"
+     with open(path, "r", encoding="utf8") as f:
+         raw = f.read()
+
+     op.on_event(
+         {
+             "type": "INPUT",
+             "id": "tick",
+             "value": pa.array(
+                 [
+                     {
+                         "raw": raw,
+                         "path": path,
+                         "query": "Can you change the RGB to change according to the object distances",
+                     }
+                 ]
+             ),
+             "metadata": [],
+         },
+         print,
+     )
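The fenced-block extraction used by the operator above can be exercised standalone; a minimal sketch with a made-up answer string:

```python
def extract_command(gpt_command):
    # Collect the contents of every ```python ... ``` block, in order
    blocks = []
    temp = ""
    writing = False
    for line in gpt_command.splitlines():
        if line == "```":  # closing fence ends the current block
            writing = False
            blocks.append(temp)
            temp = ""
        if writing:
            temp += line + "\n"
        if line == "```python":  # opening fence starts a new block
            writing = True
    return blocks


answer = "Here you go:\n```python\nX = 1\nY = 2\n```\nDone."
print(extract_command(answer))  # ['X = 1\nY = 2\n']
```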
operators/file_saver_op.py ADDED
@@ -0,0 +1,62 @@
+ import pyarrow as pa
+
+ from dora import DoraStatus
+
+
+ class Operator:
+     """
+     Saves the generated file, and restores the previous version on cancel/failed
+     """
+
+     def __init__(self):
+         self.last_file = ""
+         self.last_path = ""
+         self.last_metadata = None
+
+     def on_event(
+         self,
+         dora_event,
+         send_output,
+     ) -> DoraStatus:
+         if dora_event["type"] == "INPUT" and dora_event["id"] == "mistral_output_file":
+             input = dora_event["value"][0].as_py()
+
+             # Keep a copy of the current file so it can be restored later
+             with open(input["path"], "r") as file:
+                 self.last_file = file.read()
+             self.last_path = input["path"]
+             self.last_metadata = dora_event["metadata"]
+             with open(input["path"], "w") as file:
+                 file.write(input["raw"])
+
+             send_output(
+                 "saved_file",
+                 pa.array(
+                     [
+                         {
+                             "raw": input["raw"],
+                             "path": input["path"],
+                             "origin": dora_event["id"],
+                         }
+                     ]
+                 ),
+                 dora_event["metadata"],
+             )
+         if dora_event["type"] == "INPUT" and dora_event["id"] in ["cancel", "failed"]:
+             # Roll back to the previously saved version
+             with open(self.last_path, "w") as file:
+                 file.write(self.last_file)
+
+             send_output(
+                 "saved_file",
+                 pa.array(
+                     [
+                         {
+                             "raw": self.last_file,
+                             "path": self.last_path,
+                             "origin": dora_event["id"],
+                         }
+                     ]
+                 ),
+                 self.last_metadata,
+             )
+         return DoraStatus.CONTINUE
@@ -0,0 +1,27 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import pynput
2
+ import pyarrow as pa
3
+
4
+ from dora import Node
5
+
6
+ node = Node()
7
+
8
+
9
+ def on_key_release(key):
10
+ try:
11
+ if key.char == "1":
12
+ print("Key 'm' pressed up")
13
+ node.send_output("mic_on", pa.array([]))
14
+ elif key.char == "2":
15
+ print("Key '2' pressed up")
16
+ node.send_output("cancel", pa.array([]))
17
+ elif key.char == "3":
18
+ print("Key '3' pressed up")
19
+ node.send_output("failed", pa.array([]))
20
+ elif key.char == "0":
21
+ exit()
22
+
23
+ except AttributeError:
24
+ pass
25
+
26
+
27
+ pynput.keyboard.Listener(on_release=on_key_release).run()
operators/microphone_op.py ADDED
@@ -0,0 +1,43 @@
+ # Run this in the console first:
+
+ # pip install sounddevice numpy
+
+ # Don't forget to install whisper
+
+ import time
+
+ import numpy as np
+ import pyarrow as pa
+ import sounddevice as sd
+
+ from dora import DoraStatus
+
+ # Set the parameters for recording
+ SAMPLE_RATE = 16000
+ MAX_DURATION = 1
+
+
+ class Operator:
+     """
+     Records audio from the microphone when triggered
+     """
+
+     def on_event(
+         self,
+         dora_event,
+         send_output,
+     ) -> DoraStatus:
+         if dora_event["type"] == "INPUT":
+             audio_data = sd.rec(
+                 int(SAMPLE_RATE * MAX_DURATION),
+                 samplerate=SAMPLE_RATE,
+                 channels=1,
+                 dtype=np.int16,
+                 blocking=False,
+             )
+             time.sleep(MAX_DURATION)
+
+             # Convert int16 samples to float32 in [-1, 1) for downstream whisper
+             audio_data = audio_data.ravel().astype(np.float32) / 32768.0
+             if len(audio_data) > 0:
+                 send_output("audio", pa.array(audio_data), dora_event["metadata"])
+         return DoraStatus.CONTINUE
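The int16-to-float conversion in the operator above maps raw PCM samples into the [-1, 1) range expected downstream; a small numpy sketch:

```python
import numpy as np

# Fake int16 PCM samples (what sounddevice would record)
pcm = np.array([[0], [16384], [-32768], [32767]], dtype=np.int16)

# Flatten and scale by 2**15 so the full int16 range maps to [-1, 1)
audio = pcm.ravel().astype(np.float32) / 32768.0
print(audio)  # 0.0, 0.5, -1.0, ~0.99997
```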
operators/mistral_op.py ADDED
@@ -0,0 +1,149 @@
+ from dora import DoraStatus
+ import pylcs
+ import textwrap
+ import pandas as pd
+ import os
+ import pyarrow as pa
+ import numpy as np
+ from ctransformers import AutoModelForCausalLM
+
+ MIN_NUMBER_LINES = 4
+ MAX_NUMBER_LINES = 21
+
+
+ def search_most_similar_line(text, searched_line):
+     lines = text.split("\n")
+     values = []
+
+     for line in lines[MIN_NUMBER_LINES:MAX_NUMBER_LINES]:
+         values.append(pylcs.edit_distance(line, searched_line))
+     output = lines[np.array(values).argmin() + MIN_NUMBER_LINES]
+     return output
+
+
+ def strip_indentation(code_block):
+     # Use textwrap.dedent to strip common leading whitespace
+     dedented_code = textwrap.dedent(code_block)
+
+     return dedented_code
+
+
+ def replace_code_with_indentation(original_code, replacement_code):
+     # Split the original code into lines
+     lines = original_code.splitlines()
+     if len(lines) != 0:
+         # Preserve the indentation of the first line
+         indentation = lines[0][: len(lines[0]) - len(lines[0].lstrip())]
+
+         # Create a new list of lines with the replacement code and preserved indentation
+         new_code_lines = indentation + replacement_code
+     else:
+         new_code_lines = replacement_code
+     return new_code_lines
+
+
+ def replace_source_code(source_code, gen_replacement):
+     initial = search_most_similar_line(source_code, gen_replacement)
+     print("Initial source code: %s" % initial)
+     replacement = strip_indentation(
+         gen_replacement.replace("```python\n", "")
+         .replace("\n```", "")
+         .replace("\n", "")
+     )
+     intermediate_result = replace_code_with_indentation(initial, replacement)
+     print("Intermediate result: %s" % intermediate_result)
+     end_result = source_code.replace(initial, intermediate_result)
+     return end_result
+
+
+ def save_as(content, path):
+     # use at the end of replace_2 as save_as(end_result, "file_path")
+     with open(path, "w") as file:
+         file.write(content)
+
+
+ class Operator:
+     def __init__(self):
+         # Load the quantized Mistral model
+         self.llm = AutoModelForCausalLM.from_pretrained(
+             "TheBloke/OpenHermes-2.5-Mistral-7B-GGUF",
+             model_file="openhermes-2.5-mistral-7b.Q4_K_M.gguf",
+             model_type="mistral",
+             gpu_layers=50,
+         )
+
+     def on_event(
+         self,
+         dora_event,
+         send_output,
+     ) -> DoraStatus:
+         if dora_event["type"] == "INPUT":
+             input = dora_event["value"][0].as_py()
+
+             with open(input["path"], "r", encoding="utf8") as f:
+                 raw = f.read()
+             prompt = f"{raw[:400]} \n\n {input['query']}. "
+             print("received prompt: {}".format(prompt))
+             output = self.ask_mistral(
+                 "You're a python code expert. Respond with only one line of code that modifies a constant variable. Keep the uppercase.",
+                 prompt,
+             )
+             print("output: {}".format(output))
+             source_code = replace_source_code(raw, output)
+             send_output(
+                 "output_file",
+                 pa.array(
+                     [{"raw": source_code, "path": input["path"], "gen_output": output}]
+                 ),
+                 dora_event["metadata"],
+             )
+         return DoraStatus.CONTINUE
+
+     def ask_mistral(self, system_message, prompt):
+         prompt_template = f"""<|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ """
+
+         # Generate output
+         outputs = self.llm(
+             prompt_template,
+         )
+         # Get text between im_start and im_end
+         return outputs.split("<|im_end|>")[0]
+
+
+ if __name__ == "__main__":
+     op = Operator()
+
+     # Path to the current file
+     current_file_path = __file__
+
+     # Directory of the current file
+     current_directory = os.path.dirname(current_file_path)
+
+     path = current_directory + "/planning_op.py"
+     with open(path, "r", encoding="utf8") as f:
+         raw = f.read()
+
+     op.on_event(
+         {
+             "type": "INPUT",
+             "id": "tick",
+             "value": pa.array(
+                 [
+                     {
+                         "raw": raw,
+                         "path": path,
+                         "query": "Set rotation to 20",
+                     }
+                 ]
+             ),
+             "metadata": [],
+         },
+         print,
+     )
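The similar-line search in `mistral_op.py` picks the existing source line closest in edit distance to the generated line; a sketch using a pure-Python Levenshtein distance in place of `pylcs.edit_distance`:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance (stand-in for pylcs.edit_distance)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


source = "X = 0\nY = 0\nSPEED = 0.5\nROTATION = 0\n"
generated = "ROTATION = 20"

# Pick the source line with the smallest edit distance to the generated line
best = min(source.splitlines(), key=lambda line: edit_distance(line, generated))
print(best)  # ROTATION = 0
```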
operators/object_detection.py ADDED
@@ -0,0 +1,61 @@
+ #!/usr/bin/env python3
+ # -*- coding: utf-8 -*-
+
+
+ import numpy as np
+ import pyarrow as pa
+
+ from dora import DoraStatus
+ from ultralytics import YOLO
+
+ pa.array([])
+
+ CAMERA_WIDTH = 960
+ CAMERA_HEIGHT = 540
+
+
+ class Operator:
+     """
+     Inferring objects from images
+     """
+
+     def __init__(self):
+         self.model = YOLO("yolov8n.pt")
+
+     def on_event(
+         self,
+         dora_event,
+         send_output,
+     ) -> DoraStatus:
+         if dora_event["type"] == "INPUT":
+             return self.on_input(dora_event, send_output)
+         return DoraStatus.CONTINUE
+
+     def on_input(
+         self,
+         dora_input,
+         send_output,
+     ) -> DoraStatus:
+         """Handle image
+         Args:
+             dora_input (dict): containing the "id", "value", and "metadata"
+             send_output (Callable[[str, bytes | pa.Array, Optional[dict]], None]):
+                 Function for sending output to the dataflow:
+                 - First argument is the `output_id`
+                 - Second argument is the data as either bytes or `pa.Array`
+                 - Third argument is dora metadata dict
+                 e.g.: `send_output("bbox", pa.array([100], type=pa.uint8()), dora_event["metadata"])`
+         """
+
+         frame = dora_input["value"].to_numpy().reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))
+         frame = frame[:, :, ::-1]  # OpenCV image (BGR to RGB)
+         results = self.model(frame)  # includes NMS
+         # Process results
+         boxes = np.array(results[0].boxes.xyxy.cpu())
+         conf = np.array(results[0].boxes.conf.cpu())
+         label = np.array(results[0].boxes.cls.cpu())
+         # concatenate them together
+         arrays = np.concatenate((boxes, conf[:, None], label[:, None]), axis=1)
+
+         send_output("bbox", pa.array(arrays.ravel()), dora_input["metadata"])
+         return DoraStatus.CONTINUE
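The bbox array is flattened before sending and reshaped to `(-1, 6)` on the receiving side (as `plot.py` and `planning_op.py` do); a numpy round-trip sketch with made-up detections:

```python
import numpy as np

# Two detections: [min_x, min_y, max_x, max_y, confidence, label]
boxes = np.array([[10, 20, 110, 220], [300, 40, 400, 240]], dtype=np.float64)
conf = np.array([0.9, 0.6])
label = np.array([0.0, 39.0])

# Sender side: stack into an (N, 6) array, then flatten for transport
arrays = np.concatenate((boxes, conf[:, None], label[:, None]), axis=1)
flat = arrays.ravel()

# Receiver side: restore the (N, 6) shape
bboxs = flat.reshape(-1, 6)
print(bboxs.shape)  # (2, 6)
```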
operators/opencv2_stream.py ADDED
@@ -0,0 +1,36 @@
+ import ffmpeg
+ import numpy as np
+ from dora import Node
+ import pyarrow as pa
+ import cv2
+
+ node = Node()
+
+ in_filename = "tcp://192.168.2.1:40921"
+ # Global variables, change them to adapt to your needs
+ CAMERA_WIDTH = 960
+ CAMERA_HEIGHT = 540
+
+ process1 = (
+     ffmpeg.input(in_filename)
+     .output("pipe:", format="rawvideo", pix_fmt="rgb24")
+     .run_async(pipe_stdout=True)
+ )
+
+ audio = ffmpeg.input(in_filename).audio  # audio stream (currently unused)
+
+ while True:
+     # The robot streams 1280x720 RGB frames
+     in_bytes = process1.stdout.read(1280 * 720 * 3)
+     if not in_bytes:
+         break
+     in_frame = np.frombuffer(in_bytes, np.uint8).reshape([720, 1280, 3])
+
+     ## RGB to BGR
+     in_frame = in_frame[..., ::-1]
+
+     in_frame = cv2.resize(in_frame, (CAMERA_WIDTH, CAMERA_HEIGHT))
+     # out_frame = in_frame * 0.5  # do some processing
+     node.send_output("image", pa.array(in_frame.ravel()))
+
+ process1.wait()
operators/opencv_stream.py ADDED
@@ -0,0 +1,31 @@
+ import cv2
+ import pyarrow as pa
+ from dora import Node
+
+ node = Node()
+ # TCP stream URL (replace with your stream URL)
+ TCP_STREAM_URL = "tcp://192.168.2.1:40921"
+ # Global variables, change them to adapt to your needs
+ CAMERA_WIDTH = 960
+ CAMERA_HEIGHT = 540
+
+ # Create a VideoCapture object using the TCP stream URL
+ cap = cv2.VideoCapture(TCP_STREAM_URL)
+
+ # Check if the VideoCapture object opened successfully
+ assert cap.isOpened(), "Error: Could not open video capture."
+
+ while True:
+     # Read a frame from the stream
+     ret, frame = cap.read()
+
+     if not ret:
+         break  # Break the loop when no more frames are available
+     frame = cv2.resize(frame, (CAMERA_WIDTH, CAMERA_HEIGHT))
+
+     node.send_output("image", pa.array(frame.ravel()))
+
+
+ # Release the VideoCapture object and any OpenCV windows
+ cap.release()
+ cv2.destroyAllWindows()
operators/planning_op.py ADDED
@@ -0,0 +1,120 @@
+ import numpy as np
+ import pyarrow as pa
+ from dora import DoraStatus
+
+ X = 0
+ # left-right: [-1,1]
+ Y = 0
+ SPEED = 0.5
+ # pitch-axis angle in degrees(int): [-55, 55]
+ PITCH = 0
+ # yaw-axis angle in degrees(int): [-55, 55]
+ ROTATION = 0
+ # RGB LED(int) [0, 255]
+ RGB = [1, 0, 1]  # Purple
+ BRIGHTNESS = [0]  # [0, 128]
+
+ GOAL_OBJECTIVES = [X, Y, 0]
+ GIMBAL_POSITION_GOAL = [PITCH, ROTATION]
+
+ CAMERA_WIDTH = 960
+ CAMERA_HEIGHT = 540
+
+
+ def do_rectangles_overlap(rect1, rect2):
+     """
+     Check if two rectangles overlap.
+     Each rectangle is defined by two points (x1, y1, x2, y2)
+     where (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner.
+     """
+     # Extract coordinates
+     [x11, y11, x12, y12] = rect1
+     [x21, y21, x22, y22] = rect2
+
+     # Check for overlap
+     return not (x12 < x21 or x22 < x11 or y12 < y21 or y22 < y11)
+
+
+ def estimated_distance(y):
+     return ((12 * 22) / (y - (CAMERA_HEIGHT / 2))) / 2.77 - 0.08
+
+
+ class Operator:
+     def __init__(self):
+         self.position = [0, 0, 0]
+         self.gimbal_position = [0, 0]
+         self.brightness = [0]
+         self.rgb = [0, 0, 0]
+         self.bboxs = []
+         self.objects_distances = []
+
+     def on_event(
+         self,
+         dora_event: dict,
+         send_output,
+     ) -> DoraStatus:
+         global X, Y, SPEED, PITCH, ROTATION, RGB, BRIGHTNESS, GOAL_OBJECTIVES, GIMBAL_POSITION_GOAL
+         # print("ROTATION", ROTATION, flush=True)
+         if dora_event["type"] != "INPUT":
+             return DoraStatus.CONTINUE
+
+         if dora_event["id"] == "bbox":
+             bboxs = dora_event["value"].to_numpy()
+             self.bboxs = np.reshape(
+                 bboxs, (-1, 6)
+             )  # [ min_x, min_y, max_x, max_y, confidence, label ]
+             if len(self.bboxs) > 0:
+                 # Find the bbox with the highest confidence
+                 target_bbox = max(self.bboxs, key=lambda x: x[4])
+                 bbox_center_x = (target_bbox[0] + target_bbox[2]) / 2.0
+                 ROTATION = np.clip(
+                     int((bbox_center_x - CAMERA_WIDTH / 2) * 55 / (CAMERA_WIDTH / 2)),
+                     -55,
+                     55,
+                 )
+                 self.objects_distances = estimated_distance(target_bbox[3])
+
+         elif dora_event["id"] == "position":
+             [x, y, z, gimbal_pitch, gimbal_yaw] = dora_event["value"].to_numpy()
+             self.position = [x, y, z]
+             self.gimbal_position = [gimbal_pitch, gimbal_yaw]
+
+             direction = np.clip(
+                 np.array(GOAL_OBJECTIVES) - np.array(self.position), -1, 1
+             )
+             print("position ", dora_event["value"].to_numpy(), flush=True)
+             print(direction, flush=True)
+             if any(abs(direction) > 0.1):
+                 x = direction[0]
+                 y = direction[1]
+                 z = direction[2]
+
+                 print("control ", x, y, z, flush=True)
+                 send_output(
+                     "control",
+                     pa.array([x, y, 0, SPEED, 0]),
+                     dora_event["metadata"],
+                 )
+
+             if abs(gimbal_pitch - PITCH) > 0.2 or abs(gimbal_yaw - ROTATION) > 0.2:
+                 send_output(
+                     "gimbal_control",
+                     pa.array([PITCH, ROTATION, 20, 20]),
+                     dora_event["metadata"],
+                 )
+             if RGB != self.rgb:
+                 send_output(
+                     "led",
+                     pa.array(RGB),
+                     dora_event["metadata"],
+                 )
+                 self.rgb = RGB
+             if BRIGHTNESS != self.brightness:
+                 send_output(
+                     "blaster",
+                     pa.array(BRIGHTNESS),
+                     dora_event["metadata"],
+                 )
+                 self.brightness = BRIGHTNESS
+
+         return DoraStatus.CONTINUE
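The `ROTATION` update above maps the target's horizontal offset from the image center linearly onto the gimbal's [-55, 55] degree yaw range; a standalone sketch with made-up bbox coordinates:

```python
import numpy as np

CAMERA_WIDTH = 960


def yaw_for_bbox(min_x, max_x):
    # Linear map: image center -> 0 deg, image edge -> +/-55 deg, clipped to the gimbal range
    bbox_center_x = (min_x + max_x) / 2.0
    return np.clip(
        int((bbox_center_x - CAMERA_WIDTH / 2) * 55 / (CAMERA_WIDTH / 2)), -55, 55
    )


print(yaw_for_bbox(480, 480))  # 0 (dead center)
print(yaw_for_bbox(840, 960))  # 48 (target near the right edge)
```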
operators/plot.py ADDED
@@ -0,0 +1,132 @@
+ #!/usr/bin/env python3
+ # -*- coding: utf-8 -*-
+
+ import os
+ from typing import Callable, Optional, Union
+
+ import cv2
+ import numpy as np
+ import pyarrow as pa
+ from utils import LABELS
+
+ from dora import DoraStatus
+
+ pa.array([])
+
+ CI = os.environ.get("CI")
+ CAMERA_WIDTH = 960
+ CAMERA_HEIGHT = 540
+
+ font = cv2.FONT_HERSHEY_SIMPLEX
+
+ writer = cv2.VideoWriter(
+     "output01.avi",
+     cv2.VideoWriter_fourcc(*"MJPG"),
+     30,
+     (CAMERA_WIDTH, CAMERA_HEIGHT),
+ )
+
+
+ class Operator:
+     """
+     Plot image and bounding box
+     """
+
+     def __init__(self):
+         self.image = []
+         self.bboxs = []
+         self.bounding_box_messages = 0
+         self.image_messages = 0
+         self.text_whisper = ""
+
+     def on_event(
+         self,
+         dora_event: dict,
+         send_output: Callable[[str, Union[bytes, pa.UInt8Array], Optional[dict]], None],
+     ) -> DoraStatus:
+         if dora_event["type"] == "INPUT":
+             return self.on_input(dora_event, send_output)
+         return DoraStatus.CONTINUE
+
+     def on_input(
+         self,
+         dora_input: dict,
+         send_output: Callable[[str, Union[bytes, pa.UInt8Array], Optional[dict]], None],
+     ) -> DoraStatus:
+         """
+         Put image and bounding box on cv2 window.
+
+         Args:
+             dora_input["id"] (str): Id of the dora_input declared in the yaml configuration
+             dora_input["value"] (arrow array): message of the dora_input
+             send_output (Callable[[str, bytes | pa.UInt8Array, Optional[dict]], None]):
+                 Function for sending output to the dataflow:
+                 - First argument is the `output_id`
+                 - Second argument is the data as either bytes or `pa.UInt8Array`
+                 - Third argument is dora metadata dict
+                 e.g.: `send_output("bbox", pa.array([100], type=pa.uint8()), dora_event["metadata"])`
+         """
+         if dora_input["id"] == "image":
+             frame = (
+                 dora_input["value"]
+                 .to_numpy()
+                 .reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))
+                 .copy()  # copy the image because we want to modify it below
+             )
+             self.image = frame
+
+             self.image_messages += 1
+             print("received " + str(self.image_messages) + " images")
+
+         elif dora_input["id"] == "text" and len(self.image) != 0:
+             self.text_whisper = dora_input["value"][0].as_py()
+         elif dora_input["id"] == "bbox" and len(self.image) != 0:
+             bboxs = dora_input["value"].to_numpy()
+             self.bboxs = np.reshape(bboxs, (-1, 6))
+
+             self.bounding_box_messages += 1
+             print("received " + str(self.bounding_box_messages) + " bounding boxes")
+
+             for bbox in self.bboxs:
+                 [
+                     min_x,
+                     min_y,
+                     max_x,
+                     max_y,
+                     confidence,
+                     label,
+                 ] = bbox
+                 cv2.rectangle(
+                     self.image,
+                     (int(min_x), int(min_y)),
+                     (int(max_x), int(max_y)),
+                     (0, 255, 0),
+                     2,
+                 )
+
+                 d = ((12 * 22) / (max_y - (CAMERA_HEIGHT / 2))) / 2.77 - 0.08
+                 cv2.putText(
+                     self.image,
+                     LABELS[int(label)] + f", d={d:.2f}",
+                     (int(max_x), int(max_y)),
+                     font,
+                     0.75,
+                     (0, 255, 0),
+                     2,
+                     1,
+                 )
+
+             cv2.putText(
+                 self.image, self.text_whisper, (20, 35), font, 1, (250, 250, 250), 2, 1
+             )
+
+             if CI != "true":
+                 writer.write(self.image)
+                 cv2.imshow("frame", self.image)
+                 if cv2.waitKey(1) & 0xFF == ord("q"):
+                     return DoraStatus.STOP
+
+         return DoraStatus.CONTINUE
+
+     def __del__(self):
+         cv2.destroyAllWindows()
operators/reaction_op.py ADDED
@@ -0,0 +1,87 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+
4
+ from typing import Callable, Optional, Union
5
+
6
+ from time import sleep
7
+ from enum import Enum
8
+ import numpy as np
9
+ import pyarrow as pa
10
+ from utils import LABELS
11
+ from dora import DoraStatus
12
+
13
+ DISTANCE = 2
14
+
15
+
16
+ class Operator:
17
+ """
18
+ Reacting to detected objects by controlling the LEDs and blaster
19
+ """
20
+
21
+ def __init__(self):
22
+ self.over = False
23
+ self.start = False
24
+
25
+ def on_event(
26
+ self,
27
+ dora_event: dict,
28
+ send_output: Callable[[str, Union[bytes, pa.Array], Optional[dict]], None],
29
+ ) -> DoraStatus:
30
+ if dora_event["type"] == "INPUT":
31
+ return self.on_input(dora_event, send_output)
32
+ return DoraStatus.CONTINUE
33
+
34
+ def on_input(
35
+ self,
36
+ dora_input: dict,
37
+ send_output: Callable[[str, Union[bytes, pa.array], Optional[dict]], None],
38
+ ) -> DoraStatus:
39
+ if dora_input["id"] == "bbox":
40
+ if not self.start:
41
+ send_output("led", pa.array([255, 0, 0]), dora_input["metadata"])
42
+ self.start = True
43
+ bboxs = dora_input["value"].to_numpy()
44
+ bboxs = np.reshape(bboxs, (-1, 6))
45
+ bottle = False
46
+ laser = False
47
+ obstacle = False
48
+ for bbox in bboxs:
49
+ box = True
50
+ [
51
+ min_x,
52
+ min_y,
53
+ max_x,
54
+ max_y,
55
+ confidence,
56
+ label,
57
+ ] = bbox
58
+
59
+ if (
60
+ (min_x + max_x) / 2 > 240
61
+ and (min_x + max_x) / 2 < 400
62
+ and LABELS[int(label)] == "cup"
63
+ ):
64
+ laser = True
65
+ if (
66
+ (min_x + max_x) / 2 > 240
67
+ and (min_x + max_x) / 2 < 400
68
+ and LABELS[int(label)] == "bottle"
69
+ ):
70
+ bottle = True
71
+
72
+ if LABELS[int(label)] != "ABC" and not obstacle:
73
+ obstacle = True
74
+ if laser:
75
+ send_output("blaster", pa.array([128]), dora_input["metadata"])
76
+ else:
77
+ send_output("blaster", pa.array([0]), dora_input["metadata"])
78
+ if bottle:
79
+ send_output("led", pa.array([0, 0, 255]), dora_input["metadata"])
80
+ elif obstacle:
81
+ send_output("led", pa.array([0, 255, 0]), dora_input["metadata"])
82
+ else:
83
+ send_output("led", pa.array([0, 0, 0]), dora_input["metadata"])
84
+ obstacle = False
85
+ bottle = False
86
+ laser = False
87
+ return DoraStatus.CONTINUE
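The laser and bottle checks above share one condition: the horizontal center of the box must fall within the 240–400 px band of the frame that the operator treats as "in front of the robot". A hypothetical standalone version of that band check:

```python
def in_firing_band(min_x: float, max_x: float,
                   lo: float = 240, hi: float = 400) -> bool:
    """True when the horizontal center of a bounding box lies inside the
    band of the frame the reaction operator treats as directly ahead.

    The 240-400 bounds are taken from the operator above.
    """
    center = (min_x + max_x) / 2
    return lo < center < hi
```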
operators/robot.py ADDED
@@ -0,0 +1,99 @@
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+
4
+ from robomaster import robot, blaster, led
5
+ from typing import Callable, Optional, Union
6
+
7
+ # from robot import RobotController
8
+ import pyarrow as pa
9
+
10
+ from dora import DoraStatus
11
+
12
+ # Global variables; change them to fit your needs
13
+ FREQ = 20
14
+ CONN = "ap"
15
+
16
+
17
+ class Operator:
18
+ def __init__(self):
19
+ self.ep_robot = robot.Robot()
20
+ print("Initializing robot...")
21
+ assert self.ep_robot.initialize(conn_type=CONN), "Could not initialize ep_robot"
22
+ assert self.ep_robot.camera.start_video_stream(
23
+ display=False
24
+ ), "Could not start video stream"
25
+
26
+ self.ep_robot.gimbal.recenter().wait_for_completed()
27
+ self.position = [0, 0, 0]
28
+ self.gimbal_position = [0, 0]
29
+ self.event = None
30
+
31
+ def on_event(
32
+ self,
33
+ dora_event: str,
34
+ send_output: Callable[[str, Union[bytes, pa.UInt8Array], Optional[dict]], None],
35
+ ) -> DoraStatus:
36
+ event_type = dora_event["type"]
37
+ if event_type == "INPUT":
38
+ if dora_event["id"] == "tick":
39
+ send_output(
40
+ "position",
41
+ pa.array(self.position + self.gimbal_position),
42
+ dora_event["metadata"],
43
+ )
44
+
45
+ elif dora_event["id"] == "control":
46
+ if not (
47
+ self.event is not None
48
+ and not (self.event._event.isSet() and self.event.is_completed)
49
+ ):
50
+ [x, y, z, xy_speed, z_speed] = dora_event["value"].to_numpy()
51
+ print(f"received control: {x, y, z, xy_speed, z_speed}", flush=True)
52
+ self.event = self.ep_robot.chassis.move(
53
+ x=x, y=y, z=z, xy_speed=xy_speed, z_speed=z_speed
54
+ )
55
+ self.position[0] += x
56
+ self.position[1] += y
57
+ self.position[2] += z
58
+ else:
59
+ print("control not completed", flush=True)
60
+ print("Set: ", self.event._event.isSet(), flush=True)
61
+ print("Completed:", self.event.is_completed, flush=True)
62
+
63
+ elif dora_event["id"] == "gimbal_control":
64
+ if not (
65
+ self.event is not None
66
+ and not (self.event._event.isSet() and self.event.is_completed)
67
+ ):
68
+ [
69
+ gimbal_pitch,
70
+ gimbal_yaw,
71
+ gimbal_pitch_speed,
72
+ gimbal_yaw_speed,
73
+ ] = dora_event["value"].to_numpy()
74
+
75
+ self.event = self.ep_robot.gimbal.moveto(
76
+ pitch=gimbal_pitch,
77
+ yaw=gimbal_yaw,
78
+ pitch_speed=gimbal_pitch_speed,
79
+ yaw_speed=gimbal_yaw_speed,
80
+ )
81
+ self.gimbal_position[0] = gimbal_pitch
82
+ self.gimbal_position[1] = gimbal_yaw
83
+
84
+ elif dora_event["id"] == "blaster":
85
+ [brightness] = dora_event["value"].to_numpy()
86
+ if brightness > 0:
87
+ self.ep_robot.blaster.set_led(
88
+ brightness=brightness, effect=blaster.LED_ON
89
+ )
90
+ else:
91
+ self.ep_robot.blaster.set_led(brightness=0, effect=blaster.LED_OFF)
92
+ elif dora_event["id"] == "led":
93
+ print("received led", flush=True)
94
+ [r, g, b] = dora_event["value"].to_numpy()
95
+ self.ep_robot.led.set_led(
96
+ comp=led.COMP_ALL, r=r, g=g, b=b, effect=led.EFFECT_ON
97
+ )
98
+
99
+ return DoraStatus.CONTINUE
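The guard `not (self.event is not None and not (self.event._event.isSet() and self.event.is_completed))` above is a double negative that simply means "no chassis/gimbal action is still pending". A sketch of the same predicate, simplified — the stubbed attributes mirror the RoboMaster SDK's action objects as used in the code above:

```python
def can_send_command(event) -> bool:
    """True when a new chassis/gimbal command may be issued: either no
    action has been started yet, or the previous action has both fired
    its internal event and reported completion.
    """
    return event is None or (event._event.isSet() and event.is_completed)
```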
operators/sentence_transformers_op.py ADDED
@@ -0,0 +1,106 @@
1
+ from sentence_transformers import SentenceTransformer
2
+ from sentence_transformers import util
3
+
4
+ from dora import DoraStatus
5
+ import os
6
+ import sys
7
+ import inspect
8
+ import torch
9
+ import pyarrow as pa
10
+
11
+ SHOULD_NOT_BE_INCLUDED = [
12
+ "utils.py",
13
+ "sentence_transformers_op.py",
14
+ "chatgpt_op.py",
15
+ "whisper_op.py",
16
+ "microphone_op.py",
17
+ "object_detection_op.py",
18
+ "webcam.py",
19
+ ]
20
+
21
+ SHOULD_BE_INCLUDED = ["planning_op.py"]
22
+
23
+
24
+ ## Collect the contents and paths of all Python files in the given directory
25
+ def get_all_functions(path):
26
+ raw = []
27
+ paths = []
28
+ for root, dirs, files in os.walk(path):
29
+ for file in files:
30
+ if file.endswith(".py"):
31
+ if file not in SHOULD_BE_INCLUDED:
32
+ continue
33
+ path = os.path.join(root, file)
34
+ with open(path, "r", encoding="utf8") as f:
35
+ ## add file folder to system path
36
+ sys.path.append(root)
37
+ ## import module from path
38
+ raw.append(f.read())
39
+ paths.append(path)
40
+
41
+ return raw, paths
42
+
43
+
44
+ def search(query_embedding, corpus_embeddings, paths, raw, k=5, file_extension=None):
45
+ # TODO: filtering by file extension
46
+ cos_scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
47
+ top_results = torch.topk(cos_scores, k=min(k, len(cos_scores)), sorted=True)
48
+ out = []
49
+ for score, idx in zip(top_results[0], top_results[1]):
50
+ out.extend([raw[idx], paths[idx], score])
51
+ return out
52
+
53
+
54
+ class Operator:
55
+ """ """
56
+
57
+ def __init__(self):
58
+ ## TODO: Add an initialization step
59
+ self.model = SentenceTransformer("BAAI/bge-large-en-v1.5")
60
+ self.encoding = []
61
+ # file directory
62
+ path = os.path.dirname(os.path.abspath(__file__))
63
+
64
+ self.raw, self.path = get_all_functions(path)
65
+ # Encode all files
66
+ self.encoding = self.model.encode(self.raw)
67
+
68
+ def on_event(
69
+ self,
70
+ dora_event,
71
+ send_output,
72
+ ) -> DoraStatus:
73
+ if dora_event["type"] == "INPUT":
74
+ if dora_event["id"] == "query":
75
+ values = dora_event["value"].to_pylist()
76
+
77
+ query_embeddings = self.model.encode(values)
78
+ output = search(
79
+ query_embeddings,
80
+ self.encoding,
81
+ self.path,
82
+ self.raw,
83
+ )
84
+ [raw, path, score] = output[0:3]
85
+ print(
86
+ (
87
+ score,
88
+ pa.array([{"raw": raw, "path": path, "query": values[0]}]),
89
+ )
90
+ )
91
+ send_output(
92
+ "raw_file",
93
+ pa.array([{"raw": raw, "path": path, "query": values[0]}]),
94
+ dora_event["metadata"],
95
+ )
96
+ else:
97
+ input = dora_event["value"][0].as_py()
98
+ index = self.path.index(input["path"])
99
+ self.raw[index] = input["raw"]
100
+ self.encoding[index] = self.model.encode([input["raw"]])[0]
101
+
102
+ return DoraStatus.CONTINUE
103
+
104
+
105
+ if __name__ == "__main__":
106
+ operator = Operator()
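The `search` helper above ranks file embeddings by cosine similarity via `util.cos_sim` and `torch.topk`. An equivalent sketch in plain NumPy, without the sentence-transformers or torch dependency, to show the ranking step in isolation:

```python
import numpy as np

def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 5):
    """Return (indices, scores) of the k corpus rows most cosine-similar
    to the query vector, best match first."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                    # cosine similarity per corpus row
    idx = np.argsort(-scores)[:k]     # descending order, top k
    return idx, scores[idx]
```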
operators/utils.py ADDED
@@ -0,0 +1,82 @@
1
+ LABELS = [
2
+ "ABC",
3
+ "bicycle",
4
+ "car",
5
+ "motorcycle",
6
+ "airplane",
7
+ "bus",
8
+ "train",
9
+ "truck",
10
+ "boat",
11
+ "traffic light",
12
+ "fire hydrant",
13
+ "stop sign",
14
+ "parking meter",
15
+ "bench",
16
+ "bird",
17
+ "cat",
18
+ "dog",
19
+ "horse",
20
+ "sheep",
21
+ "cow",
22
+ "elephant",
23
+ "bear",
24
+ "zebra",
25
+ "giraffe",
26
+ "backpack",
27
+ "umbrella",
28
+ "handbag",
29
+ "tie",
30
+ "suitcase",
31
+ "frisbee",
32
+ "skis",
33
+ "snowboard",
34
+ "sports ball",
35
+ "kite",
36
+ "baseball bat",
37
+ "baseball glove",
38
+ "skateboard",
39
+ "surfboard",
40
+ "tennis racket",
41
+ "bottle",
42
+ "wine glass",
43
+ "cup",
44
+ "fork",
45
+ "knife",
46
+ "spoon",
47
+ "bowl",
48
+ "banana",
49
+ "apple",
50
+ "sandwich",
51
+ "orange",
52
+ "broccoli",
53
+ "carrot",
54
+ "hot dog",
55
+ "pizza",
56
+ "donut",
57
+ "cake",
58
+ "chair",
59
+ "couch",
60
+ "potted plant",
61
+ "bed",
62
+ "dining table",
63
+ "toilet",
64
+ "tv",
65
+ "laptop",
66
+ "mouse",
67
+ "remote",
68
+ "keyboard",
69
+ "cell phone",
70
+ "microwave",
71
+ "oven",
72
+ "toaster",
73
+ "sink",
74
+ "refrigerator",
75
+ "book",
76
+ "clock",
77
+ "vase",
78
+ "scissors",
79
+ "teddy bear",
80
+ "hair drier",
81
+ "toothbrush",
82
+ ]
operators/webcam.py ADDED
@@ -0,0 +1,83 @@
1
+ #!/usr/bin/env python3
2
+ # -*- coding: utf-8 -*-
3
+
4
+ import os
5
+ import time
6
+
7
+ import cv2
8
+ import numpy as np
9
+ import pyarrow as pa
10
+
11
+ from dora import DoraStatus
12
+
13
+ CI = os.environ.get("CI")
14
+
15
+ CAMERA_WIDTH = 960
16
+ CAMERA_HEIGHT = 540
17
+ CAMERA_INDEX = int(os.getenv("CAMERA_INDEX", 0))
18
+
19
+ font = cv2.FONT_HERSHEY_SIMPLEX
20
+
21
+
22
+ class Operator:
23
+ """
24
+ Sending images from the webcam into the dataflow
25
+ """
26
+
27
+ def __init__(self):
28
+ self.video_capture = cv2.VideoCapture(CAMERA_INDEX)
29
+ self.start_time = time.time()
30
+ self.video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, CAMERA_WIDTH)
31
+ self.video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, CAMERA_HEIGHT)
32
+
33
+ def on_event(
34
+ self,
35
+ dora_event: str,
36
+ send_output,
37
+ ) -> DoraStatus:
38
+ event_type = dora_event["type"]
39
+ if event_type == "INPUT":
40
+ ret, frame = self.video_capture.read()
41
+ if ret:
42
+ frame = cv2.resize(frame, (CAMERA_WIDTH, CAMERA_HEIGHT))
43
+
44
+ ## Push an error image in case the camera is not available.
45
+ else:
46
+ frame = np.zeros((CAMERA_HEIGHT, CAMERA_WIDTH, 3), dtype=np.uint8)
47
+ cv2.putText(
48
+ frame,
49
+ "No Webcam was found at index %d" % (CAMERA_INDEX),
50
+ (int(30), int(30)),
51
+ font,
52
+ 0.75,
53
+ (255, 255, 255),
54
+ 2,
55
+ 1,
56
+ )
57
+ if CI != "true":
58
+ return DoraStatus.CONTINUE
59
+
60
+ send_output(
61
+ "image",
62
+ pa.array(frame.ravel()),
63
+ dora_event["metadata"],
64
+ )
65
+ elif event_type == "STOP":
66
+ print("received stop")
67
+ else:
68
+ print("received unexpected event:", event_type)
69
+
70
+ if time.time() - self.start_time < 10000:
71
+ return DoraStatus.CONTINUE
72
+ else:
73
+ return DoraStatus.STOP
74
+
75
+ def __del__(self):
76
+ self.video_capture.release()
77
+
78
+
79
+ if __name__ == "__main__":
80
+ op = Operator()
81
+ op.on_event(
82
+ {"type": "INPUT", "id": "tick", "value": pa.array([0]), "metadata": []}, print
83
+ )
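The operator above flattens each frame with `ravel()` before sending it; the receiving side rebuilds it with `reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))`. The round trip is lossless as long as both operators agree on the dimensions — a minimal sketch:

```python
import numpy as np

CAMERA_WIDTH, CAMERA_HEIGHT = 960, 540

def pack(frame: np.ndarray) -> np.ndarray:
    """Flatten an HxWx3 frame into the 1-D buffer sent over the dataflow."""
    return frame.ravel()

def unpack(buf: np.ndarray) -> np.ndarray:
    """Rebuild the frame on the receiving side; both operators must share
    the same CAMERA_WIDTH / CAMERA_HEIGHT constants."""
    return buf.reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))
```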
operators/whisper_op.py ADDED
@@ -0,0 +1,41 @@
1
+ # Run this in the console first:
2
+
3
+ # pip install sounddevice numpy scipy pydub keyboard
4
+
5
+ # Don't forget to install whisper
6
+
7
+
8
+ import pyarrow as pa
9
+ import whisper
10
+
11
+ from dora import DoraStatus
12
+
13
+
14
+ model = whisper.load_model("base")
15
+
16
+
17
+ class Operator:
18
+ """
19
+ Transcribing speech to text with Whisper
20
+ """
21
+
22
+ def on_event(
23
+ self,
24
+ dora_event,
25
+ send_output,
26
+ ) -> DoraStatus:
27
+ if dora_event["type"] == "INPUT":
28
+ audio = dora_event["value"].to_numpy()
29
+ audio = whisper.pad_or_trim(audio)
30
+
31
+ ## make log-Mel spectrogram and move to the same device as the model
32
+ # mel = whisper.log_mel_spectrogram(audio).to(model.device)
33
+
34
+ ## decode the audio
35
+ # result = whisper.decode(model, mel, options)
36
+ result = model.transcribe(audio, language="en")
37
+ text = result["text"]
38
+ print(text, flush=True)
39
+ text = "Can you change the bounding box to purple?"
40
+ send_output("text", pa.array([text]), dora_event["metadata"])
41
+ return DoraStatus.CONTINUE
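`whisper.pad_or_trim` above fixes the audio buffer to Whisper's 30-second window (480,000 samples at 16 kHz) before transcription. A hypothetical plain-NumPy equivalent, to illustrate what that call does to the operator's input:

```python
import numpy as np

N_SAMPLES = 16000 * 30  # 30 seconds at 16 kHz, Whisper's fixed window

def pad_or_trim(audio: np.ndarray, length: int = N_SAMPLES) -> np.ndarray:
    """Trim audio longer than `length`; zero-pad audio shorter than it,
    so the model always sees a fixed-size input."""
    if len(audio) > length:
        return audio[:length]
    return np.pad(audio, (0, length - len(audio)))
```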
requirements.txt ADDED
@@ -0,0 +1,58 @@
1
+ # Requires Python 3.8
2
+ robomaster
3
+ dora-rs
4
+ torch
5
+ torchvision
6
+ torchaudio
7
+ opencv-python
8
+ # YOLOv5 requirements
9
+ # Usage: pip install -r requirements.txt
10
+
11
+ # Base ----------------------------------------
12
+ ultralytics
13
+ matplotlib>=3.2.2
14
+ numpy>=1.18.5
15
+ Pillow>=7.1.2
16
+ PyYAML>=5.3.1
17
+ requests>=2.23.0
18
+ scipy>=1.4.1
19
+ --extra-index-url https://download.pytorch.org/whl/cpu
20
+ tqdm>=4.64.0
21
+ protobuf<=3.20.1 # https://github.com/ultralytics/yolov5/issues/8012
22
+
23
+ # Logging -------------------------------------
24
+ tensorboard>=2.4.1
25
+ # wandb
26
+ # clearml
27
+
28
+ # Plotting ------------------------------------
29
+ pandas>=1.1.4
30
+ seaborn>=0.11.0
31
+
32
+ # Export --------------------------------------
33
+ # coremltools>=5.2 # CoreML export
34
+ # onnx>=1.9.0 # ONNX export
35
+ # onnx-simplifier>=0.4.1 # ONNX simplifier
36
+ # nvidia-pyindex # TensorRT export
37
+ # nvidia-tensorrt # TensorRT export
38
+ # scikit-learn==0.19.2 # CoreML quantization
39
+ # tensorflow>=2.4.1 # TFLite export (or tensorflow-cpu, tensorflow-aarch64)
40
+ # tensorflowjs>=3.9.0 # TF.js export
41
+ # openvino-dev # OpenVINO export
42
+
43
+ # Extras --------------------------------------
44
+ ipython # interactive notebook
45
+ psutil # system utilization
46
+ thop>=0.1.1 # FLOPs computation
47
+ # albumentations>=1.0.3
48
+ # pycocotools>=2.0 # COCO mAP
49
+ # roboflow
50
+
51
+ opencv-python>=4.1.1
52
+ pyarrow
53
+ maturin
54
+
55
+ sounddevice
56
+ openai-whisper
57
+ sentence-transformers
58
+ pynput
s1_SDK/dji.json ADDED
The diff for this file is too large to render. See raw diff
 
s1_SDK/dji_hdvt_uav ADDED
Binary file (739 kB). View file
 
s1_SDK/dji_scratch/bin/dji_scratch.py ADDED
@@ -0,0 +1,244 @@
1
+ import sys
2
+
3
+ sys.path.append("/data/dji_scratch/src/robomaster/custom_ui")
4
+ sys.path.append("/data/dji_scratch/src/robomaster/multi_comm")
5
+ sys.path.append("/data/dji_scratch/sdk")
6
+ sys.path.append("/data/dji_scratch/sdk/plaintext_sdk")
7
+ import rm_log
8
+ import event_client
9
+ import script_manage
10
+ import duml_cmdset
11
+ import rm_define
12
+ import duss_event_msg
13
+ import tools
14
+ import time
15
+ import signal
16
+ import traceback
17
+ import os
18
+ import rm_socket
19
+ import rm_ctrl
20
+ import subprocess
21
+
22
+ subprocess.Popen(["/system/bin/sh", "/data/patch.sh"])
23
+
24
+ LOG_STREAM_OUT_FLAG = True
25
+
26
+ LOG_FILE_OUT_LEVEL = rm_log.INFO
27
+ LOG_STREAM_OUT_LEVEL = rm_log.INFO
28
+
29
+ param = os.sched_param(5)
30
+ os.sched_setaffinity(
31
+ 0,
32
+ (
33
+ 0,
34
+ 1,
35
+ ),
36
+ )
37
+ os.sched_setscheduler(0, os.SCHED_RR, param)
38
+
39
+ logger = rm_log.dji_scratch_logger_get()
40
+
41
+ event_dji_system = event_client.EventClient(rm_define.system_host_id)
42
+
43
+ if not LOG_STREAM_OUT_FLAG:
44
+ LOG_STREAM_OUT_LEVEL = None
45
+ logger = rm_log.logger_init(
46
+ logger, event_dji_system, LOG_FILE_OUT_LEVEL, LOG_STREAM_OUT_LEVEL
47
+ )
48
+
49
+ local_sub_service = script_manage.LocalSubService(event_dji_system)
50
+ script_ctrl = script_manage.ScriptCtrl(event_dji_system)
51
+ script_process = script_manage.ScriptProcessCtrl(script_ctrl, local_sub_service)
52
+ local_sub_service.init_sys_power_on_time()
53
+
54
+ # create a ModulesStatusCtrl and initialize it to get the status of other modules
55
+ modulesStatus_ctrl = rm_ctrl.ModulesStatusCtrl(event_dji_system)
56
+ modulesStatus_ctrl.init()
57
+ # share the object (modulesStatus_ctrl) with the script_ctrl thread
58
+ script_ctrl.register_modulesStatusCtrl_obj(modulesStatus_ctrl)
59
+
60
+ push_heartbeat_id = (
61
+ duml_cmdset.DUSS_MB_CMDSET_COMMON << 8 | duml_cmdset.DUSS_MB_CMD_COM_HEARTBEAT
62
+ )
63
+ event_dji_system.async_req_register(
64
+ push_heartbeat_id, script_process.request_push_heartbeat
65
+ )
66
+
67
+ activeMsg = duss_event_msg.EventMsg(tools.hostid2senderid(event_dji_system.my_host_id))
68
+ activeMsg.set_default_receiver(rm_define.system_id)
69
+ activeMsg.set_default_cmdset(duml_cmdset.DUSS_MB_CMDSET_RM)
70
+ activeMsg.set_default_cmdtype(duml_cmdset.NEED_ACK_TYPE)
71
+
72
+
73
+ def get_action_state():
74
+ activeMsg.init()
75
+ activeMsg.cmd_id = duml_cmdset.DUSS_MB_CMD_RM_1860_ACTIVE_STATE_GET
76
+ duss_result, resp = event_dji_system.send_sync(activeMsg)
77
+ if resp["data"][1] == 1:
78
+ return True
79
+ else:
80
+ return False
81
+
82
+
83
+ ACTIVE_FLAG = False
84
+ while ACTIVE_FLAG:
85
+ logger.fatal("DEVICE NOT BE ACTIVED!")
86
+ # ACTIVE_FLAG = get_action_state()
87
+ if ACTIVE_FLAG:
88
+ break
89
+ time.sleep(2)
90
+
91
+ # register callback
92
+ logger.info("DJI SCRATCH REGISTER CALLBACKS..")
93
+ link_state_id = (
94
+ duml_cmdset.DUSS_MB_CMDSET_RM << 8 | duml_cmdset.DUSS_MB_CMD_RM_LINK_STATE_PUSH
95
+ )
96
+ get_version_id = (
97
+ duml_cmdset.DUSS_MB_CMDSET_COMMON << 8 | duml_cmdset.DUSS_MB_CMD_GET_DEVICE_VERSION
98
+ )
99
+ download_data_id = (
100
+ duml_cmdset.DUSS_MB_CMDSET_RM << 8 | duml_cmdset.DUSS_MB_CMD_RM_SCRIPT_DOWNLOAD_DATA
101
+ )
102
+ download_finish_id = (
103
+ duml_cmdset.DUSS_MB_CMDSET_RM << 8
104
+ | duml_cmdset.DUSS_MB_CMD_RM_SCRIPT_DOWNLOAD_FINSH
105
+ )
106
+ script_ctrl_id = (
107
+ duml_cmdset.DUSS_MB_CMDSET_RM << 8 | duml_cmdset.DUSS_MB_CMD_RM_SCRIPT_CTRL
108
+ )
109
+ custom_skill_config_query_id = (
110
+ duml_cmdset.DUSS_MB_CMDSET_RM << 8
111
+ | duml_cmdset.DUSS_MB_CMD_RM_CUSTOM_SKILL_CONFIG_QUERY
112
+ )
113
+ auto_test_id = (
114
+ duml_cmdset.DUSS_MB_CMDSET_RM << 8 | duml_cmdset.DUSS_MB_CMD_RM_SCRATCH_AUTO_TEST
115
+ )
116
+ update_sys_date_id = (
117
+ duml_cmdset.DUSS_MB_CMDSET_COMMON << 8 | duml_cmdset.DUSS_MB_CMD_SET_DATE
118
+ )
119
+
120
+ event_dji_system.async_req_register(link_state_id, script_process.get_link_state)
121
+ event_dji_system.async_req_register(get_version_id, script_process.request_get_version)
122
+ event_dji_system.async_req_register(
123
+ download_data_id, script_process.request_recv_script_file
124
+ )
125
+ event_dji_system.async_req_register(
126
+ download_finish_id, script_process.request_create_script_file
127
+ )
128
+ event_dji_system.async_req_register(
129
+ script_ctrl_id, script_process.request_ctrl_script_file
130
+ )
131
+ event_dji_system.async_req_register(auto_test_id, script_process.request_auto_test)
132
+ event_dji_system.async_req_register(update_sys_date_id, script_process.update_sys_date)
133
+ event_dji_system.async_req_register(
134
+ custom_skill_config_query_id, script_process.query_custom_skill_config
135
+ )
136
+
137
+
138
+ G_SCRIPT_FINISH = False
139
+
140
+
141
+ def QUIT_SIGNAL(signum, frame):
142
+ global G_SCRIPT_FINISH
143
+ logger.info("Signal handler called with signal = " + str(signum))
144
+ G_SCRIPT_FINISH = True
145
+ return
146
+
147
+
148
+ signal.signal(signal.SIGTSTP, QUIT_SIGNAL)
149
+ signal.signal(signal.SIGTERM, QUIT_SIGNAL)
150
+ signal.signal(signal.SIGINT, QUIT_SIGNAL)
151
+
152
+ logger.info("DJI SCRATCH ENTER MAINLOOP...")
153
+
154
+ pingMsg = duss_event_msg.EventMsg(tools.hostid2senderid(event_dji_system.my_host_id))
155
+ pingMsg.set_default_receiver(rm_define.mobile_id)
156
+ pingMsg.set_default_cmdset(duml_cmdset.DUSS_MB_CMDSET_RM)
157
+ pingMsg.set_default_cmdtype(duml_cmdset.REQ_PKG_TYPE)
158
+
159
+
160
+ def push_info_to_mobile(content):
161
+ pingMsg.init()
162
+ pingMsg.append("level", "uint8", 0)
163
+ pingMsg.append("length", "uint16", len(str(content)))
164
+ pingMsg.append("content", "string", str(content))
165
+ pingMsg.cmd_id = duml_cmdset.DUSS_MB_CMD_RM_SCRIPT_LOG_INFO
166
+ event_dji_system.send_sync(pingMsg)
167
+
168
+
169
+ local_sub_service.enable()
170
+
171
+ UNKNOW = 0
172
+ PRO_ROBOMASTER_S1 = 1
173
+ PRO_ROBOMASTER_S1_EDU = 2
174
+
175
+
176
+ def is_sdk_enable():
177
+ product_attri_req_msg = duss_event_msg.EventMsg(
178
+ tools.hostid2senderid(event_dji_system.my_host_id)
179
+ )
180
+ product_attri_req_msg.set_default_receiver(rm_define.system_id)
181
+ product_attri_req_msg.set_default_cmdset(duml_cmdset.DUSS_MB_CMDSET_RM)
182
+ product_attri_req_msg.set_default_cmdtype(duml_cmdset.NEED_ACK_TYPE)
183
+ product_attri_req_msg.init()
184
+ product_attri_req_msg.cmd_id = duml_cmdset.DUSS_MB_CMD_RM_PRODUCT_ATTRIBUTE_GET
185
+ result, resp = event_dji_system.send_sync(product_attri_req_msg)
186
+
187
+ if result == rm_define.DUSS_SUCCESS:
188
+ data = resp["data"]
189
+ ret_code = data[0]
190
+ if ret_code != 0:
191
+ logger.error("get product attribute failue, errcode=%d" % data[0])
192
+ # return False
193
+ return True
194
+ pro = data[1]
195
+ # return pro == PRO_ROBOMASTER_S1_EDU
196
+ return True
197
+ else:
198
+ logger.info("Robot is S1")
199
+ # return False
200
+ return True
201
+
202
+
203
+ socket_ctrl = rm_socket.RmSocket()
204
+ uart_ctrl = rm_ctrl.SerialCtrl(event_dji_system)
205
+ script_ctrl.register_socket_obj(socket_ctrl)
206
+ script_ctrl.register_uart_obj(uart_ctrl)
207
+
208
+ # Try to enable the SDK and determine whether the extension features can be used in scratch
209
+ try:
210
+ import sdk_manager
211
+
212
+ sdk_manager_ctrl = sdk_manager.SDKManager(event_dji_system, socket_ctrl, uart_ctrl)
213
+
214
+ retry_count = 3
215
+ while retry_count > 0:
216
+ retry_count -= 1
217
+ if is_sdk_enable():
218
+ script_ctrl.set_edu_status(True)
219
+ modulesStatus_ctrl.set_edu_status(True)
220
+ sdk_manager_ctrl.enable_plaintext_sdk()
221
+ break
222
+ else:
223
+ time.sleep(1)
224
+ if retry_count <= 0:
225
+ del sdk_manager
226
+ script_ctrl.set_edu_status(False)
227
+ modulesStatus_ctrl.set_edu_status(False)
228
+ except Exception as e:
229
+ logger.fatal(e)
230
+
231
+ socket_ctrl.init()
232
+
233
+ while not G_SCRIPT_FINISH:
234
+ try:
235
+ time.sleep(5)
236
+ except Exception as e:
237
+ logger.fatal(traceback.format_exc())
238
+ G_SCRIPT_FINISH = True
239
+ break
240
+
241
+ script_ctrl.stop()
242
+ event_dji_system.stop()
243
+
244
+ logger.info("DJI SCRATCH EXIT!!!")
s1_SDK/dji_scratch/sdk/plaintext_sdk/__init__.py ADDED
@@ -0,0 +1,3 @@
1
+ import protocal_parser
2
+
3
+ PlaintextSDK = protocal_parser.ProtocalParser
s1_SDK/dji_scratch/sdk/plaintext_sdk/protocal_mapping_table.json ADDED
@@ -0,0 +1,311 @@
1
+ {
2
+ "stream" : {
3
+ "obj": "sdk_ctrl",
4
+ "functions" : {
5
+ "on" : {
6
+ "set" : ["stream_on"],
7
+ "get" : []
8
+ },
9
+ "off" : {
10
+ "set" : ["stream_off"],
11
+ "get" : []
12
+ }
13
+ }
14
+ },
15
+ "audio" : {
16
+ "obj": "sdk_ctrl",
17
+ "functions" : {
18
+ "on" : {
19
+ "set" : ["audio_on"],
20
+ "get" : []
21
+ },
22
+ "off" : {
23
+ "set" : ["audio_off"],
24
+ "get" : []
25
+ }
26
+ }
27
+ },
28
+ "game_msg" : {
29
+ "obj": "sdk_ctrl",
30
+ "functions" : {
31
+ "on" : {
32
+ "set" : ["game_push_on"],
33
+ "get" : []
34
+ },
35
+ "off" : {
36
+ "set" : ["game_push_off"],
37
+ "get" : []
38
+ }
39
+ }
40
+ },
41
+ "robot" : {
42
+ "obj": "robot_ctrl",
43
+ "functions" : {
44
+ "mode" : {
45
+ "set" : ["set_mode", "mode"],
46
+ "get" : ["get_mode"]
47
+ },
48
+ "battery" : {
49
+ "set" : [],
50
+ "get" : ["get_battery_percentage"]
51
+ }
52
+ }
53
+ },
54
+ "chassis" : {
55
+ "obj" : "chassis_ctrl",
56
+ "functions" : {
57
+ "speed" : {
58
+ "set" : ["update_move_speed", "x", "y", "z"],
59
+ "get" : ["get_move_speed"]
60
+ },
61
+ "wheel" : {
62
+ "set" : ["update_wheel_speed", "w2", "w1", "w3", "w4"],
63
+ "get" : ["get_wheel_speed"]
64
+ },
65
+ "move" : {
66
+ "set" : ["update_position_based_on_cur", "x", "y", "z", "vxy", "vz", "wait_for_complete"],
67
+ "get" : []
68
+ },
69
+ "position" : {
70
+ "set" : [],
71
+ "get" : ["get_position"]
72
+ },
73
+ "attitude" : {
74
+ "set" : [],
75
+ "get" : ["get_attitude"]
76
+ },
77
+ "status" : {
78
+ "set" : [],
79
+ "get" : ["get_status"]
80
+ },
81
+ "push" : {
82
+ "set" : ["sdk_info_push_attr_set", "position", "pfreq", "attitude", "afreq", "status", "sfreq", "freq"],
83
+ "get" : []
84
+ },
85
+ "stop" : {
86
+ "set" : ["stop"],
87
+ "get" : []
88
+ }
89
+ }
90
+ },
91
+ "gimbal" : {
92
+ "obj" : "gimbal_ctrl",
93
+ "functions" : {
94
+ "speed" : {
95
+ "set" : ["update_speed", "p", "y"],
96
+ "get" : []
97
+ },
98
+ "move" : {
99
+ "set" : ["update_angle_based_on_cur", "p", "y", "vp", "vy", "wait_for_complete"],
100
+ "get" : []
101
+ },
102
+ "moveto" : {
103
+ "set" : ["update_angle_based_on_origin", "p", "y", "vp", "vy", "wait_for_complete"],
104
+ "get" : []
105
+ },
106
+ "attitude" : {
107
+ "set" : [],
108
+ "get" : ["get_angle"]
109
+ },
110
+ "suspend" : {
111
+ "set" : ["suspend"],
112
+ "get" : []
113
+ },
114
+ "resume" : {
115
+ "set" : ["resume"],
116
+ "get" : []
117
+ },
118
+ "recenter" : {
119
+ "set" : ["recenter"],
120
+ "get" : []
121
+ },
122
+ "push" : {
123
+ "set" : ["sdk_info_push_attr_set", "attitude", "afreq", "freq"],
124
+ "get" : []
125
+ },
126
+ "stop" : {
127
+ "set" : ["stop"],
128
+ "get" : []
129
+ }
130
+ }
131
+ },
132
+ "blaster" : {
133
+ "obj" : "blaster_ctrl",
134
+ "functions" : {
135
+ "bead" : {
136
+ "set" : ["set_fire_count", "counter"],
137
+ "get" : ["get_fire_count"]
138
+ },
139
+ "fire" : {
140
+ "set" : ["fire_once"],
141
+ "get" : []
142
+ }
143
+ }
144
+ },
145
+ "armor" : {
146
+ "obj" : "armor_ctrl",
147
+ "functions" : {
148
+ "sensitivity" : {
149
+ "set" : ["set_hit_sensitivity", "level"],
150
+ "get" : ["get_hit_sensitivity"]
151
+ },
152
+ "event" : {
153
+ "set" : ["sdk_event_push_enable_flag_set", "hit", "reserve"],
154
+ "get" : []
155
+ }
156
+ }
157
+ },
158
+ "sound" : {
159
+ "obj" : "media_ctrl",
160
+ "functions" : {
161
+ "event" : {
162
+ "set" : ["sdk_event_push_enable_flag_set", "applause", "reserve"],
163
+ "get" : []
164
+ }
165
+ }
166
+ },
167
+ "pwm" : {
168
+ "obj" : "chassis_ctrl",
169
+ "functions" : {
170
+ "value" : {
171
+ "set" : ["set_pwm_value", "port", "data"],
172
+ "get" : []
173
+ },
174
+ "freq" : {
175
+ "set" : ["set_pwm_freq", "port", "data"],
176
+ "get" : []
177
+ }
178
+ }
179
+ },
180
+ "sensor_adapter" : {
181
+ "obj" : "sensor_adapter_ctrl",
182
+ "functions" : {
183
+ "adc" : {
184
+ "set" : [],
185
+ "get" : ["get_sensor_adapter_adc", "id", "port"]
186
+ },
187
+ "io_level" : {
188
+ "set" : [],
189
+ "get" : ["get_sensor_adapter_io_level", "id", "port"]
190
+ },
191
+ "pulse_period" : {
192
+ "set" : [],
193
+ "get" : ["get_sensor_adapter_pulse_period", "id", "port"]
194
+ },
195
+ "event" : {
196
+ "set" : ["sdk_event_push_enable_flag_set", "io_level", "reserve"],
197
+ "get" : []
198
+ }
199
+ }
200
+ },
201
+ "ir_distance_sensor" : {
202
+ "obj" : "ir_distance_sensor_ctrl",
203
+ "functions" : {
204
+ "measure" : {
205
+ "set" : ["measure_ctrl", "enable"],
206
+ "get" : []
207
+ },
208
+ "distance" : {
209
+ "set" : [],
210
+ "get" : ["get_distance_info", "id"]
211
+ }
212
+ }
213
+ },
214
+ "servo" : {
215
+ "obj" : "servo_ctrl",
216
+ "functions" : {
217
+ "angle" : {
218
+ "set" : ["set_angle", "id", "angle", "wait_for_complete"],
219
+ "get" : ["get_angle", "id"]
220
+ },
221
+ "speed" : {
222
+ "set" : ["set_speed", "id", "speed"],
223
+ "get" : []
224
+ },
225
+ "recenter" : {
226
+ "set" : ["recenter", "id", "wait_for_complete"],
227
+ "get" : []
228
+ },
229
+ "stop" : {
230
+ "set" : ["stop", "id"],
231
+ "get" : []
232
+ }
233
+ }
234
+ },
235
+ "robotic_arm" : {
236
+ "obj" : "robotic_arm_ctrl",
237
+ "functions" : {
238
+ "move" : {
239
+ "set" : ["move", "x", "y", "wait_for_complete"],
240
+ "get" : []
241
+ },
242
+ "moveto" : {
243
+ "set" : ["moveto", "x", "y", "wait_for_complete"],
244
+ "get" : []
245
+ },
246
+ "position" : {
247
+ "set" : [],
248
+ "get" : ["get_position"]
249
+ },
250
+ "recenter" : {
251
+ "set" : ["recenter", "wait_for_complete"],
252
+ "get" : []
253
+ },
254
+ "stop" : {
255
+ "set" : ["stop"],
256
+ "get" : []
257
+ }
258
+ }
259
+ },
260
+ "robotic_gripper" : {
261
+ "obj" : "gripper_ctrl",
262
+ "functions" : {
263
+ "open" : {
264
+ "set" : ["open", "level"],
265
+ "get" : []
266
+ },
267
+ "close" : {
268
+ "set" : ["close", "level"],
269
+ "get" : []
270
+ },
271
+ "status" : {
272
+ "set" : [],
273
+ "get" : ["get_status"]
274
+ },
275
+ "stop" : {
276
+ "set" : ["stop"],
277
+ "get" : []
278
+ }
279
+ }
280
+ },
281
+ "led" : {
282
+ "obj" : "led_ctrl",
283
+ "functions" : {
284
+ "control" : {
285
+ "set" : ["update_led_t", "comp", "effect", "r", "g", "b", "blink_freq", "single_led_index"],
286
+ "get" : []
287
+ }
288
+ }
289
+ },
290
+ "AI" : {
291
+ "obj" : "AI_ctrl",
292
+ "functions" : {
293
+ "push" : {
294
+ "set" : ["ctrl_detection", "people", "pose", "line", "marker", "robot", "freq"],
295
+ "get" : []
296
+ },
297
+ "attribute" : {
298
+ "set" : ["attr_set", "line_color", "marker_color", "marker_dist"]
299
+ }
300
+ }
301
+ },
302
+ "camera" : {
303
+ "obj" : "media_ctrl",
304
+ "functions" : {
305
+ "exposure" : {
306
+ "set" : ["exposure_value_update_sdk", "ev"],
307
+ "get" : []
308
+ }
309
+ }
310
+ }
311
+ }
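Each top-level key of this table names a module (`chassis`, `gimbal`, …), `obj` names the controller object that implements it, and each function entry lists the method name followed by its parameter names. A hypothetical resolver over that structure, using a trimmed copy of the `chassis` entry:

```python
import json

# A reduced copy of one entry from protocal_mapping_table.json
TABLE = json.loads("""
{
  "chassis": {
    "obj": "chassis_ctrl",
    "functions": {
      "speed": {"set": ["update_move_speed", "x", "y", "z"],
                "get": ["get_move_speed"]}
    }
  }
}
""")

def resolve(module: str, function: str, action: str = "set"):
    """Return (controller_name, method_name, parameter_names) for a
    plaintext-SDK command like `chassis speed x 0.5 y 0 z 0`."""
    entry = TABLE[module]
    sig = entry["functions"][function][action]
    return entry["obj"], sig[0], sig[1:]
```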
s1_SDK/dji_scratch/sdk/plaintext_sdk/protocal_parser.py ADDED
@@ -0,0 +1,855 @@
+ import queue
+ import threading
+ import time
+ import json
+ import traceback
+ import os
+ import re
+
+ import event_client
+ import rm_ctrl
+ import rm_define
+ import rm_log
+ import tools
+
+ import rm_socket
+
+ logger = rm_log.dji_scratch_logger_get()
+
+ PROTOCAL_MAPPING_TABLE_PATH = os.path.dirname(__file__) + "/protocal_mapping_table.json"
+
+ COMMAND_PORT = 40923
+ PUSH_PORT = 40924
+ EVENT_PORT = 40925
+ BROADCAST_PORT = 40926
+
+ INADDR_ANY = "0.0.0.0"
+ WIFI_DIRECT_CONNECTION_IP = "192.168.2.1"
+
+
+ class ProtocalParser(object):
+     UART = "uart"
+     NETWORK = "network"
+
+     def __init__(self, event_dji_system, socket_obj, uart_obj):
+         self.event_client = event_dji_system
+         self.sdk_ctrl = rm_ctrl.SDKCtrl(event_dji_system)
+         self.version = ""
+
+         self.socket_obj = socket_obj
+         self.uart_obj = uart_obj
+         self.connection_obj = None
+
+         self.command_socket_fd = None
+         self.event_socket_fd = None
+         self.push_socket_fd = None
+
+         self.remote_host_ip = set()
+         self.connection_socket_fd = {}
+
+         self.data_queue = queue.Queue(512)
+         self.uart_data_t = ""
+         self.socket_data_t = ""
+
+         # serialize command execution: if a command is still executing,
+         # an error is returned when the user sends another one;
+         # 'command1;command2;' is supported to run several commands in order
+         self.command_execing_event = threading.Event()
+
+         self.command_parser_callback = {
+             "command": self.command_protocal_format_parser,
+             "version": self.version_protocal_format_parser,
+             "quit": self.quit_protocal_format_parser,
+         }
+
+         self.data_process_thread = None
+
+         self.protocal_mapping_table = None
+
+         self.sdk_mode = False
+
+         self.ctrl_obj = {}
+
+         self.report_local_host_ip_timer = None
+
+     def init(self, config={}):
+         self.config = config
+
+         f = open(PROTOCAL_MAPPING_TABLE_PATH, "r")
+         self.protocal_mapping_table = json.load(f)
+         f.close()
+
+         self.command_socket_fd = self.socket_obj.create(
+             self.socket_obj.TCP_MODE,
+             (INADDR_ANY, COMMAND_PORT),
+             server=True,
+             recv_msgq_size=8,
+             send_msgq_size=8,
+             connected_callback=self.__command_connected_callback,
+             disconnected_callback=self.__command_disconnected_callback,
+         )
+         if self.command_socket_fd:
+             # TODO: handle the error
+             logger.info("command socket created successfully.")
+
+         self.event_socket_fd = self.socket_obj.create(
+             self.socket_obj.TCP_MODE,
+             (INADDR_ANY, EVENT_PORT),
+             server=True,
+             recv_msgq_size=8,
+             send_msgq_size=8,
+             connected_callback=self.__event_connected_callback,
+         )
+         if self.event_socket_fd:
+             logger.info("event socket created successfully.")
+
+         self.push_socket_fd = self.socket_obj.create(
+             self.socket_obj.UDP_MODE,
+             (INADDR_ANY, PUSH_PORT),
+             server=False,
+             recv_msgq_size=1,
+             send_msgq_size=8,
+         )
+         if self.push_socket_fd:
+             logger.info("push socket created successfully.")
+
+         self.broadcast_socket_fd = self.socket_obj.create(
+             self.socket_obj.UDP_MODE,
+             (INADDR_ANY, BROADCAST_PORT),
+             server=False,
+             recv_msgq_size=1,
+             send_msgq_size=8,
+         )
+
+         if self.broadcast_socket_fd:
+             self.socket_obj.set_udp_default_target_addr(
+                 self.broadcast_socket_fd, ("<broadcast>", BROADCAST_PORT)
+             )
+             logger.info("broadcast socket created successfully.")
+
+         self.ctrl_obj = {}
+
+         if self.report_local_host_ip_timer is None:
+             self.report_local_host_ip_timer = tools.get_timer(
+                 2, self.report_local_host_ip
+             )
+             self.report_local_host_ip_timer.start()
+
+         self.uart_obj.sdk_process_callback_register(self.__uart_command_recv_callback)
+
+     def __event_connected_callback(self, fd, new_fd):
+         logger.info("New event connected")
+         self.socket_obj.update_socket_info(
+             new_fd,
+             recv_msgq_size=1,
+             send_msgq_size=8,
+         )
+         if fd not in self.connection_socket_fd.keys():
+             self.connection_socket_fd[fd] = []
+
+         self.connection_socket_fd[fd].append(new_fd)
+
+     def __event_recv_callback(self, fd, data):
+         pass
+
+     def __event_disconnected_callback(self, fd):
+         pass
+
+     def __command_connected_callback(self, fd, new_fd):
+         if self.connection_obj == self.uart_obj:
+             logger.info("Uart has already connected")
+             return
+         else:
+             logger.info("New command connected")
+             self.connection_status_report("connected", fd, new_fd)
+             self.socket_obj.update_socket_info(
+                 new_fd,
+                 recv_msgq_size=8,
+                 send_msgq_size=8,
+                 recv_callback=self.__command_recv_callback,
+             )
+
+             self.remote_host_ip.add(self.socket_obj.get_remote_host_ip(new_fd))
+
+             if fd not in self.connection_socket_fd.keys():
+                 self.connection_socket_fd[fd] = []
+             self.connection_socket_fd[fd].append(new_fd)
+
+     def __command_recv_callback(self, fd, data):
+         if self.connection_obj == self.uart_obj:
+             logger.info("Uart has already connected")
+             return
+         else:
+             self.socket_data_t += data
+             if ";" in self.socket_data_t:
+                 data_list = self.socket_data_t.split(";")
+
+                 # the tail element is either empty (the data ended with ';')
+                 # or an incomplete command, so pop it and save it for later
+                 self.socket_data_t = data_list.pop(-1)
+
+                 for msg in data_list:
+                     self.protocal_parser(fd, msg, self.NETWORK)
+             else:
+                 logger.info("No ';' found in buffer, waiting for next data")
+                 return
+
+     def __command_disconnected_callback(self, fd):
+         self.quit_protocal_format_parser(self.NETWORK, fd, None)
+         self.connection_status_report("disconnected", fd, None)
+
+     def __uart_command_recv_callback(self, data):
+         logger.info(data)
+         if self.connection_obj == self.socket_obj:
+             logger.info("Network has already connected")
+         else:
+             self.uart_data_t += data
+
+             if ";" in self.uart_data_t:
+                 data_list = self.uart_data_t.split(";")
+
+                 # the tail element is either empty (the data ended with ';')
+                 # or an incomplete command, so pop it and save it for later
+                 self.uart_data_t = data_list.pop(-1)
+
+                 logger.info(data_list)
+                 for msg in data_list:
+                     self.protocal_parser(None, msg, self.UART)
+             else:
+                 logger.info("No ';' found in buffer, waiting for next data")
+                 return
+
+     def command_execing_start(self):
+         self.command_execing_event.set()
+
+     def command_execing_is_finish(self):
+         # note: the original was missing this 'return', so the busy check
+         # in protocal_parser() could never fire
+         return self.command_execing_event.is_set()
+
+     def command_execing_finish(self):
+         self.command_execing_event.clear()
+
+     def report_local_host_ip(self):
+         ip = self.socket_obj.get_local_host_ip()
+         if ip and tools.is_station_mode():
+             self.socket_obj.send(self.broadcast_socket_fd, "robot ip %s" % ip)
+
+     def sdk_robot_ctrl(self, ctrl):
+         def init():
+             self.ctrl_obj["event"] = event_client.EventClient()
+             self.ctrl_obj["modulesStatus_ctrl"] = rm_ctrl.ModulesStatusCtrl(
+                 self.ctrl_obj["event"]
+             )
+             self.ctrl_obj["blaster_ctrl"] = rm_ctrl.GunCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["armor_ctrl"] = rm_ctrl.ArmorCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["AI_ctrl"] = rm_ctrl.VisionCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["chassis_ctrl"] = rm_ctrl.ChassisCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["gimbal_ctrl"] = rm_ctrl.GimbalCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["robot_ctrl"] = rm_ctrl.RobotCtrl(
+                 self.ctrl_obj["event"],
+                 self.ctrl_obj["chassis_ctrl"],
+                 self.ctrl_obj["gimbal_ctrl"],
+             )
+             self.ctrl_obj["led_ctrl"] = rm_ctrl.LedCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["media_ctrl"] = rm_ctrl.MediaCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["mobile_ctrl"] = rm_ctrl.MobileCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["tools"] = rm_ctrl.RobotTools(self.ctrl_obj["event"])
+             self.ctrl_obj["sensor_adapter_ctrl"] = rm_ctrl.SensorAdapterCtrl(
+                 self.ctrl_obj["event"]
+             )
+             self.ctrl_obj["ir_distance_sensor_ctrl"] = rm_ctrl.IrDistanceSensorCtrl(
+                 self.ctrl_obj["event"]
+             )
+             self.ctrl_obj["servo_ctrl"] = rm_ctrl.ServoCtrl(self.ctrl_obj["event"])
+             self.ctrl_obj["robotic_arm_ctrl"] = rm_ctrl.RoboticArmCtrl(
+                 self.ctrl_obj["event"]
+             )
+             self.ctrl_obj["gripper_ctrl"] = rm_ctrl.RoboticGripperCtrl(
+                 self.ctrl_obj["event"]
+             )
+             self.ctrl_obj["sdk_ctrl"] = rm_ctrl.SDKCtrl(self.ctrl_obj["event"])
+             # log_ctrl = rm_ctrl.LogCtrl(event)
+
+         def ready():
+             self.ctrl_obj["robot_ctrl"].init()
+             self.ctrl_obj["modulesStatus_ctrl"].init()
+             self.ctrl_obj["gimbal_ctrl"].init()
+             self.ctrl_obj["chassis_ctrl"].init()
+             self.ctrl_obj["led_ctrl"].init()
+             self.ctrl_obj["blaster_ctrl"].init()
+             self.ctrl_obj["mobile_ctrl"].init()
+             self.ctrl_obj["servo_ctrl"].init()
+             self.ctrl_obj["ir_distance_sensor_ctrl"].init()
+             self.ctrl_obj["tools"].init()
+
+             self.ctrl_obj["robot_ctrl"].enable_sdk_mode()
+             self.ctrl_obj["robot_ctrl"].set_mode(rm_define.robot_mode_gimbal_follow)
+             self.ctrl_obj["chassis_ctrl"].stop()
+             self.ctrl_obj["tools"].program_timer_start()
+
+             self.ctrl_obj["AI_ctrl"].sdk_info_push_callback_register(
+                 self.AI_info_push_callback
+             )
+             self.ctrl_obj["armor_ctrl"].sdk_event_push_callback_register(
+                 self.armor_event_push_callback
+             )
+             self.ctrl_obj["media_ctrl"].sdk_event_push_callback_register(
+                 self.applause_event_push_callback
+             )
+             self.ctrl_obj["chassis_ctrl"].sdk_info_push_callback_register(
+                 self.chassis_info_push_callback
+             )
+             self.ctrl_obj["gimbal_ctrl"].sdk_info_push_callback_register(
+                 self.gimbal_info_push_callback
+             )
+             self.ctrl_obj["sensor_adapter_ctrl"].sdk_event_push_callback_register(
+                 self.io_level_event_push_callback
+             )
+             self.ctrl_obj["sdk_ctrl"].sdk_info_push_callback_register(
+                 self.youth_competition_msg_push_callback
+             )
+
+         def stop():
+             self.ctrl_obj["blaster_ctrl"].stop()
+             self.ctrl_obj["chassis_ctrl"].stop()
+             self.ctrl_obj["gimbal_ctrl"].stop()
+             self.ctrl_obj["media_ctrl"].stop()
+             self.ctrl_obj["AI_ctrl"].stop()
+             self.ctrl_obj["armor_ctrl"].stop()
+
+         def exit():
+             stop()
+             self.ctrl_obj["robot_ctrl"].disable_sdk_mode()
+             self.ctrl_obj["robot_ctrl"].exit()
+             self.ctrl_obj["gimbal_ctrl"].exit()
+             self.ctrl_obj["chassis_ctrl"].exit()
+             self.ctrl_obj["blaster_ctrl"].exit()
+             self.ctrl_obj["mobile_ctrl"].exit()
+             self.ctrl_obj["armor_ctrl"].exit()
+             self.ctrl_obj["media_ctrl"].exit()
+             self.ctrl_obj["sdk_ctrl"].exit()
+             self.ctrl_obj["ir_distance_sensor_ctrl"].exit()
+             self.ctrl_obj["sensor_adapter_ctrl"].exit()
+             self.ctrl_obj["servo_ctrl"].exit()
+             self.ctrl_obj["gripper_ctrl"].exit()
+             self.ctrl_obj["event"].stop()
+             self.ctrl_obj.clear()
+
+         if ctrl == "init":
+             init()
+         elif ctrl == "ready":
+             ready()
+         elif ctrl == "stop":
+             stop()
+         elif ctrl == "exit":
+             exit()
+
+     def __data_process(self):
+         self.sdk_robot_ctrl("init")
+         self.sdk_robot_ctrl("ready")
+
+         while self.sdk_mode:
+             result = False
+             try:
+                 fd, data = self.data_queue.get(timeout=1)
+             except queue.Empty:
+                 continue
+             self.command_execing_start()
+             if data.req_type == "set":
+                 cmd = str(data.obj) + "." + str(data.function) + str(data.param)
+
+                 logger.info(cmd)
+
+                 try:
+                     result = eval(cmd, self.ctrl_obj)
+
+                 except Exception as e:
+                     logger.fatal(traceback.format_exc())
+                     self.ack(fd, "fail", data.seq)
+                     continue
+                 if (
+                     (type(result) == tuple and result[-1] == 0)
+                     or (type(result) == bool and result == True)
+                     or result is None
+                     or (type(result) == int and result == 0)
+                 ):
+                     self.ack(fd, "ok", data.seq)
+                 else:
+                     self.ack(fd, "fail", data.seq)
+                     logger.fatal(
+                         "process : "
+                         + str(data.obj)
+                         + "."
+                         + str(data.function)
+                         + str(data.param)
+                         + " exec_result:"
+                         + str(result)
+                     )
+             elif data.req_type == "get":
+                 if data.param is None:
+                     cmd = str(data.obj) + "." + str(data.function) + "()"
+                 else:
+                     cmd = str(data.obj) + "." + str(data.function) + str(data.param)
+
+                 logger.info(cmd)
+
+                 try:
+                     result = eval(cmd, self.ctrl_obj)
+
+                 except Exception as e:
+                     logger.fatal(traceback.format_exc())
+                     self.ack(fd, "fail", data.seq)
+                     continue  # the original fell through here with a stale result
+                 seq = data.seq
+                 data = ""
+                 if type(result) == tuple or type(result) == list:
+                     for i in result:
+                         if type(i) == float:
+                             data = data + "%.3f" % i + " "
+                         else:
+                             data = data + str(i) + " "
+                 else:
+                     data = str(result) + " "
+                 self.ack(fd, data, seq)
+             else:
+                 time.sleep(0.05)
+             self.command_execing_finish()
+
+         self.sdk_robot_ctrl("exit")
+
+     def protocal_parser(self, fd, data, mode=None):
+         # command
+         logger.info("Recv string: %s" % (data))
+         command = data.split(" ")
+
+         if len(command) == 0:
+             return
+
+         # find 'seq'
+         seq = None
+         if "seq" in command:
+             seq_pos = command.index("seq")
+             if len(command) > seq_pos + 1:
+                 seq = command[seq_pos + 1]
+                 if seq.isdigit():
+                     seq = int(seq)
+                 elif re.match(r"^0x[0-9a-fA-F]+$", seq):
+                     seq = int(seq, 16)
+                 else:
+                     self.ack(fd, "command format error: seq parse error")
+             else:
+                 self.ack(fd, "command format error: no seq value")
+             command = command[0:seq_pos]
+
+         if self.command_execing_is_finish():
+             self.ack(fd, "error", seq)
+             return False
+
+         # check protocal format
+         command_obj = command[0]
+
+         # call process function
+         if command_obj in self.command_parser_callback.keys():
+             result = self.command_parser_callback[command_obj](mode, fd, seq)
+             if result == False or result is None:
+                 self.ack(fd, "%s exec error" % command_obj, seq)
+             elif result == True:
+                 self.ack(fd, "ok", seq)
+             else:
+                 self.ack(fd, result, seq)
+         else:
+             if not self.sdk_mode:
+                 self.ack(fd, "not in sdk mode", seq)
+                 return False
+             result = self.ctrl_protocal_format_parser(command, seq)
+             if result == False or result is None:
+                 self.ack(fd, "command format error: command parse error", seq)
+             else:
+                 if not self.data_queue.full():
+                     try:
+                         self.data_queue.put_nowait((fd, result))
+                     except Exception as e:
+                         # queue full?
+                         logger.fatal(e)
+
+     def command_protocal_format_parser(self, mode, fd, seq):
+         if self.sdk_mode == False:
+             self.sdk_mode = True
+             if (
+                 self.data_process_thread is None
+                 or self.data_process_thread.is_alive() == False
+             ):
+                 self.data_process_thread = threading.Thread(target=self.__data_process)
+                 self.data_process_thread.start()
+
+             if (
+                 self.report_local_host_ip_timer
+                 and self.report_local_host_ip_timer.is_start()
+             ):
+                 self.report_local_host_ip_timer.join()
+                 self.report_local_host_ip_timer.stop()
+
+             if mode == self.UART:
+                 self.connection_obj = self.uart_obj
+                 self.uart_data_t = ""
+             elif mode == self.NETWORK:
+                 self.connection_obj = self.socket_obj
+                 self.socket_data_t = ""
+
+             return True
+         else:
+             return "Already in SDK mode"
+
+     def version_protocal_format_parser(self, mode, fd, seq):
+         if "version" in self.config.keys():
+             return "version " + self.config["version"]
+
+     def quit_protocal_format_parser(self, mode, fd, seq):
+         if self.data_process_thread and self.data_process_thread.is_alive():
+             if self.report_local_host_ip_timer is None:
+                 self.report_local_host_ip_timer = tools.get_timer(
+                     2, self.connection_obj.report_local_host_ip
+                 )
+                 self.report_local_host_ip_timer.start()
+             else:
+                 self.report_local_host_ip_timer.start()
+             self.sdk_mode = False
+             self.data_process_thread.join()
+             self.ack(fd, "ok", seq)
+             if mode:
+                 self.connection_obj = None
+                 self.socket_data_t = ""
+                 self.uart_data_t = ""
+             return True
+         else:
+             self.ack(fd, "quit sdk mode failed", seq)
+             if mode:
+                 self.connection_obj = None
+             return False
+
+     def ctrl_protocal_format_parser(self, command, seq):
+         cmdpkg = CommandPackage()
+         cmdpkg.seq = seq
+
+         try:
+             # get object
+             obj = command[0]
+             if obj in self.protocal_mapping_table.keys():
+                 cmdpkg.obj = self.protocal_mapping_table[obj]["obj"]
+             else:
+                 logger.error("obj parse error")
+                 return False
+
+             # get function key
+             function = command[1]
+             if function in self.protocal_mapping_table[obj]["functions"].keys():
+                 function_dict = self.protocal_mapping_table[obj]["functions"][function]
+
+                 # check if get command
+                 if "?" in command:
+                     params_list = command[2:]
+                     if "?" in params_list:
+                         params_list.remove("?")
+                     cmdpkg.function = function_dict["get"][0]
+                     cmdpkg.req_type = "get"
+                     params = []
+
+                     """
+                     if len(function_dict['get'][1:]) != 0 and len(params_list) != 0:
+                         cmdpkg.param = tuple(params_list[0:len(function_dict['get'][1:])])
+                     """
+
+                     for param in function_dict["get"][1:]:
+                         # handle the case where the first param is a status bit
+                         if len(function_dict["get"][1:]) == 1:
+                             value = None
+                             if len(params_list) == 0:
+                                 value = None
+                             elif len(params_list) == 1:
+                                 value = params_list[0]
+                             elif params_list[0] == function_dict["get"][1:][0]:
+                                 value = params_list[1]
+                             if value and value.isdigit():
+                                 value = int(value)
+                             # guard against value being None (the set branch
+                             # below had this check, the get branch was missing it)
+                             elif value and re.match(r"^0x[0-9a-fA-F]+$", value):
+                                 value = int(value, 16)
+                             elif value == "True" or value == "true":
+                                 value = True
+                             elif value == "False" or value == "false":
+                                 value = False
+                             else:
+                                 try:
+                                     value = float(value)
+                                 except Exception as e:
+                                     pass
+                             params.append(value)
+                             break
+
+                         # check params
+                         if param in params_list and params_list.index(param) + 1 < len(
+                             params_list
+                         ):
+                             value = params_list[params_list.index(param) + 1]
+                             if value and value.isdigit():
+                                 value = int(value)
+                             elif re.match(r"^0x[0-9a-fA-F]+$", value):
+                                 value = int(value, 16)
+                             elif value == "True" or value == "true":
+                                 value = True
+                             elif value == "False" or value == "false":
+                                 value = False
+                             else:
+                                 try:
+                                     value = float(value)
+                                 except Exception as e:
+                                     pass
+                             params.append(value)
+                         else:
+                             params.append(None)
+
+                     cmdpkg.param = tuple(params)
+                     logger.info(cmdpkg.param)
+
+                 # set command
+                 else:
+                     # get params list
+                     params_list = command[2:]
+                     cmdpkg.function = function_dict["set"][0]
+                     cmdpkg.req_type = "set"
+                     params = []
+
+                     for param in function_dict["set"][1:]:
+                         # handle the case where the first param is a status bit
+                         if len(function_dict["set"][1:]) == 1:
+                             value = None
+                             if len(params_list) == 0:
+                                 value = None
+                             elif len(params_list) == 1:
+                                 value = params_list[0]
+                             elif len(params_list) == 2:
+                                 value = params_list[1]
+                             if value and value.isdigit():
+                                 value = int(value)
+                             elif value and re.match(r"^0x[0-9a-fA-F]+$", value):
+                                 value = int(value, 16)
+                             elif value == "True" or value == "true":
+                                 value = True
+                             elif value == "False" or value == "false":
+                                 value = False
+                             else:
+                                 try:
+                                     value = float(value)
+                                 except Exception as e:
+                                     pass
+                             params.append(value)
+                             break
+
+                         # check params
+                         if param in params_list and params_list.index(param) + 1 < len(
+                             params_list
+                         ):
+                             value = params_list[params_list.index(param) + 1]
+                             if value.isdigit():
+                                 value = int(value)
+                             elif re.match(r"^0x[0-9a-fA-F]+$", value):
+                                 value = int(value, 16)
+                             elif value == "True" or value == "true":
+                                 value = True
+                             elif value == "False" or value == "false":
+                                 value = False
+                             else:
+                                 try:
+                                     value = float(value)
+                                 except Exception as e:
+                                     pass
+                             params.append(value)
+                         else:
+                             params.append(None)
+
+                     cmdpkg.param = tuple(params)
+                     logger.info(cmdpkg.param)
+             else:
+                 logger.error("function key parse error")
+                 return False
+         except Exception as e:
+             logger.fatal(traceback.format_exc())
+             return False
+
+         return cmdpkg
+
+     def connection_status_report(self, status, fd, data):
+         logger.info(
+             "connect status changed, local host ip info : %s remote host ip info: %s, cur status: %s"
+             % (
+                 self.socket_obj.get_local_host_ip(data),
+                 self.socket_obj.get_remote_host_ip(data),
+                 status,
+             )
+         )
+         mode = "wifi"
+         if data is not None:
+             ip = self.socket_obj.get_local_host_ip(data)
+             if ip == tools.get_ip_by_dev_name("wlan0"):
+                 mode = "wifi"
+             elif ip == tools.get_ip_by_dev_name("rndis0"):
+                 mode = "rndis"
+         logger.info("connect mode: %s" % (mode))
+
+         if status == "connected":
+             self.sdk_ctrl.sdk_on(mode)
+         elif status == "disconnected":
+             self.sdk_ctrl.sdk_off()
+
+     def armor_event_push_callback(self, event):
+         if len(event) == 0:
+             return
+
+         msg = "armor event"
+         if "hit" in event.keys():
+             msg += " hit %d %d ;" % (event["hit"])
+         self.send("event", msg)
+
+     def applause_event_push_callback(self, event):
+         if len(event) == 0:
+             return
+
+         msg = "sound event"
+         if "applause" in event.keys():
+             msg += " applause %d ;" % (event["applause"])
+         self.send("event", msg)
+
+     def io_level_event_push_callback(self, event):
+         if len(event) == 0:
+             return
+
+         msg = "sensor_adapter event"
+         if "io_level" in event.keys():
+             msg += " io_level %d ;" % (event["io_level"])
+         self.send("event", msg)
+
+     def chassis_position_info_push_callback(self, x, y):
+         pass
+
+     def chassis_info_push_callback(self, info):
+         if len(info) == 0:
+             return
+
+         msg = "chassis push"
+         if "position" in info.keys():
+             msg += " position %.3f %.3f ;" % (info["position"])
+         if "attitude" in info.keys():
+             msg += " attitude %.3f %.3f %.3f ;" % (info["attitude"])
+         if "status" in info.keys():
+             msg += " status %d %d %d %d %d %d %d %d %d %d %d ;" % (info["status"])
+         self.send("push", msg)
+
+     def gimbal_info_push_callback(self, info):
+         if len(info) == 0:
+             return
+
+         msg = "gimbal push"
+         if "attitude" in info.keys():
+             msg += " attitude %.3f %.3f ;" % (info["attitude"])
+         self.send("push", msg)
+
+     def AI_info_push_callback(self, info):
+         if len(info) == 0:
+             return
+         msg = "AI push"
+         if "people" in info.keys():
+             msg += " people %d" % len(info["people"])
+             for i in info["people"]:
+                 msg += " %.3f %.3f %.3f %.3f" % (i.pos.x, i.pos.y, i.size.w, i.size.h)
+         if "pose" in info.keys():
+             msg += " pose %d" % len(info["pose"])
+             for i in info["pose"]:
+                 msg += " %d %.3f %.3f %.3f %.3f" % (
+                     i.info,
+                     i.pos.x,
+                     i.pos.y,
+                     i.size.w,
+                     i.size.h,
+                 )
+         if "marker" in info.keys():
+             msg += " marker %d" % len(info["marker"])
+             for i in info["marker"]:
+                 msg += " %d %.3f %.3f %.3f %.3f" % (
+                     i.info,
+                     i.pos.x,
+                     i.pos.y,
+                     i.size.w,
+                     i.size.h,
+                 )
+         if "line" in info.keys():
+             msg += " line %d" % int(len(info["line"]) / 10)
+             for i in info["line"]:
+                 msg += " %.3f %.3f %.3f %.3f" % (i.pos.x, i.pos.y, i.size.w, i.size.h)
+         if "robot" in info.keys():
+             msg += " robot %d" % len(info["robot"])
+             for i in info["robot"]:
+                 msg += " %.3f %.3f %.3f %.3f" % (i.pos.x, i.pos.y, i.size.w, i.size.h)
+
+         self.send("push", msg)
+
+     def gimbal_status_info_push_callback(self):
+         pass
+
+     def youth_competition_msg_push_callback(self, info):
+         if len(info) == 0:
+             logger.error("SYS_GAME : msg is none")
+             return
+         msg = "game msg push "
+         if "data" in info["game_msg"].keys():
+             msg += str(info["game_msg"]["data"])
+         self.send("push", msg)
+
+     def ack(self, fd, data, seq=None):
+         msg = data
+         if seq is not None:
+             msg += " seq %s" % (str(seq))
+
+         msg += ";"
+
+         if self.connection_obj:
+             self.connection_obj.send(fd, msg)
+
+     def req(self):
+         pass
+
+     def send(self, obj, data):
+         fd = None
+
+         data += ";"
+
+         if self.connection_obj == self.uart_obj:
+             self.connection_obj.send(None, data)
+         else:
+             if obj == "command":
+                 if self.connection_obj:
+                     return self.connection_obj.send(self.command_socket_fd, data)
+                 else:
+                     return None
+             elif obj == "event":
+                 logger.info(self.connection_socket_fd)
+                 for user_fd in self.connection_socket_fd[self.event_socket_fd]:
+                     if self.connection_obj:
+                         self.connection_obj.send(user_fd, data)
+                 return 0
+             elif obj == "push":
+                 for ip in self.remote_host_ip:
+                     if self.connection_obj:
+                         self.connection_obj.send(
+                             self.push_socket_fd, data, (ip, PUSH_PORT)
+                         )
+                 return 0
+             else:
+                 return None
+
+     def recv(self):
+         pass
+
+
+ class CommandPackage(object):
+     def __init__(self):
+         self.obj = None
+         self.function = None
+         self.param = None
+         self.seq = None
+         self.req_type = None
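`ProtocalParser` above listens for `;`-terminated plaintext commands on TCP port 40923 and echoes an optional `seq N` suffix back in its ack. A hedged sketch of a client for that channel (the helper names `frame` and `send_command` are illustrative, not part of the SDK):

```python
import socket

# Command channel port, from protocal_parser.py.
COMMAND_PORT = 40923

def frame(command, seq=None):
    """Append the optional 'seq N' suffix and the ';' terminator."""
    if seq is not None:
        command += " seq %d" % seq
    return (command + ";").encode()

def send_command(ip, command, seq=None):
    """Open the command channel, send one command, return the raw ack string."""
    with socket.create_connection((ip, COMMAND_PORT), timeout=5) as s:
        s.sendall(frame(command, seq))
        return s.recv(1024).decode()
```

A session would presumably start with `send_command(ip, "command")` to enter SDK mode and end with `send_command(ip, "quit")`, mirroring the `command`/`quit` entries in `command_parser_callback`.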
s1_SDK/dji_scratch/sdk/sdk_manager.py ADDED
@@ -0,0 +1,29 @@
+ import os
+ import plaintext_sdk
+
+
+ class SDKManager(object):
+     def __init__(self, event_client, socket_obj, uart_obj):
+         self.plaintext_sdk = plaintext_sdk.PlaintextSDK(
+             event_client, socket_obj, uart_obj
+         )
+         self.plaintext_sdk_config = {}
+         self.load_cfg()
+
+     def load_cfg(self):
+         # load version
+         cur_dir = os.path.dirname(__file__)
+         f = open(cur_dir + "/version.txt")
+         version_ori = f.readlines()
+         f.close()
+
+         version = ""
+         for i in version_ori:
+             version = version + "%.2d." % int(i.split(" ")[-1][0:-1])
+
+         version = version[0:-1]
+
+         self.plaintext_sdk_config["version"] = version
+
+     def enable_plaintext_sdk(self):
+         self.plaintext_sdk.init(self.plaintext_sdk_config)
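`load_cfg()` above turns the `#SDK_VERSION_* N` lines of `version.txt` into a zero-padded dotted string. A standalone sketch of that construction (`parse_version` is an illustrative name; it uses `rstrip("\n")` where the original slices off the last character):

```python
def parse_version(lines):
    """'#SDK_VERSION_MAJOR 0\n'-style lines -> zero-padded dotted string."""
    version = ""
    for line in lines:
        # take the numeric field after the last space, zero-padded to 2 digits
        version += "%.2d." % int(line.split(" ")[-1].rstrip("\n"))
    return version[:-1]  # drop the trailing '.'
```

With the four lines of the `version.txt` added in this commit, this yields `"00.00.00.70"`.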
s1_SDK/dji_scratch/sdk/version.txt ADDED
@@ -0,0 +1,4 @@
+ #SDK_VERSION_MAJOR 0
+ #SDK_VERSION_MINOR 0
+ #SDK_VERSION_REVISION 00
+ #SDK_VERSION_BUILD 70
s1_SDK/patch.sh ADDED
@@ -0,0 +1,22 @@
+ #!/system/bin/sh
+
+ # Run adbd for convenience. Not required to get SDK support.
+ /system/bin/adb_en.sh &
+
+ # Stop affected services.
+ stop dji_sys
+ stop dji_hdvt_uav
+ stop dji_vision
+
+ # Overwrite S1's dji.json with EP's one. Use a bind mount as the file is in the
+ # system partition.
+ mount -o bind /data/dji.json /system/etc/dji.json
+
+ # This allows accessing the robot with DJI's binary protocol on port 20020.
+ mount -o bind /data/dji_hdvt_uav /system/bin/dji_hdvt_uav
+
+ # Restart services.
+ start dji_sys
+ start dji_hdvt_uav
+ start dji_vision
+
s1_SDK/upload.sh ADDED
@@ -0,0 +1,14 @@
+ #!/bin/bash
+
+ adb shell rm -rf /data/dji_scratch/sdk
+ adb push dji_scratch/sdk /data/dji_scratch/.
+
+ adb push dji_scratch/bin/dji_scratch.py /data/dji_scratch/bin/.
+
+ adb push dji.json /data/.
+
+ adb push dji_hdvt_uav /data/.
+ adb shell chmod 755 /data/dji_hdvt_uav
+
+ adb push patch.sh /data/.
+